Standard search works by matching keywords. When a user searches for "best wineries in Napa Valley," traditional search won't find an article titled "Top Vineyards to Visit," even though the two clearly mean the same thing. That's the keyword gap, and it's exactly where Laravel semantic search shines. Instead of relying on exact keyword matches, Laravel 13 takes a smarter approach: it closes the gap by understanding the meaning behind the search, not just the words themselves.
Even better, Laravel 13 ships with native vector search support directly inside the query builder. There's no need for Pinecone, and no need for a separate Python service. All you need is PostgreSQL, the pgvector extension, and a few lines of clean Laravel code.
Now, let’s build it step by step.
What You Need
Before you start, make sure you have:
- Laravel 13 installed
- PostgreSQL as your database driver
- Laravel AI SDK (install with composer require laravel/ai)
- An API key from an embedding provider (OpenAI, Anthropic, etc.)
Note: Laravel pgvector support only works with PostgreSQL. It is not available on MySQL or SQLite in Laravel 13.
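If your app is currently on another driver, switching to PostgreSQL is just a matter of updating your .env connection settings. A minimal sketch (host, database name, and credentials are placeholders for your own values):

```ini
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=postgres
DB_PASSWORD=secret
```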
Step 1 — Set Up the Vector Column in Your Migration
First, enable the pgvector extension and add a vector column. Both happen in a migration, so start by creating a model and migration for articles.
php artisan make:model Article -m
This command creates both the model and a migration. With those in place, let's define the schema in the migration.
Laravel 13 makes this very simple:
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::ensureVectorExtensionExists();

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('body');
    $table->vector('embedding', dimensions: 1536)->index(); // 1536 for OpenAI
    $table->timestamps();
});
The Schema::ensureVectorExtensionExists() call enables the pgvector extension on your PostgreSQL database. The ->index() chain automatically creates an HNSW index. That index dramatically speeds up Laravel vector search on large datasets.
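For reference, here is roughly the SQL this produces on the PostgreSQL side. The index name and the choice of the cosine-distance operator class are assumptions about what Laravel generates, but the statements themselves are standard pgvector SQL:

```sql
-- Enable the pgvector extension (what ensureVectorExtensionExists() does)
CREATE EXTENSION IF NOT EXISTS vector;

-- An HNSW index over the embedding column, using cosine distance
CREATE INDEX articles_embedding_index
    ON articles
    USING hnsw (embedding vector_cosine_ops);
```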
You will also need to update the $fillable property on the model.
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Article extends Model
{
    protected $fillable = ['title', 'body', 'embedding'];
}
Step 2 — Cast the Vector Column on Your Model
Next, cast the embedding column to an array on your Eloquent model. This lets Laravel automatically convert between PHP arrays and the database’s vector format:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Article extends Model
{
    protected function casts(): array
    {
        return [
            'embedding' => 'array',
        ];
    }
}
That’s all the model configuration needed for Laravel semantic search to work.
Step 3 — Generate and Store Embeddings
To generate and store embeddings, start by creating a controller.
php artisan make:controller ArticleController
When you create or update an article, generate an embedding for its content and store it. Use the toEmbeddings() method available on Laravel’s Stringable class:
use Illuminate\Support\Str;
use App\Models\Article;

$article = Article::create([
    'title' => 'Top Vineyards to Visit in California',
    'body' => 'California wine country has world-class vineyards...',
]);

// Generate embedding from the article body
$embedding = Str::of($article->body)->toEmbeddings();

// Store it
$article->update(['embedding' => $embedding]);
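Putting that flow into the ArticleController from above, a store() action might look like the following. This is a sketch: the validation rules and JSON response shape are assumptions, not something Laravel prescribes.

```
<?php

namespace App\Http\Controllers;

use App\Models\Article;
use Illuminate\Http\Request;
use Illuminate\Support\Str;

class ArticleController extends Controller
{
    public function store(Request $request)
    {
        $data = $request->validate([
            'title' => ['required', 'string', 'max:255'],
            'body'  => ['required', 'string'],
        ]);

        $article = Article::create($data);

        // Generate and store the embedding for the new article's body
        $article->update([
            'embedding' => Str::of($article->body)->toEmbeddings(),
        ]);

        return response()->json($article, 201);
    }
}
```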
For bulk processing, such as backfilling embeddings for your entire articles table, first create a seeder:
php artisan make:seeder ArticleEmbeddingSeeder
Inside the seeder, use the Embeddings class. It is far more efficient because it sends a single API request instead of one per item:
use Laravel\Ai\Embeddings;
use App\Models\Article;

$articles = Article::whereNull('embedding')->get();

$response = Embeddings::for($articles->pluck('body')->all())->generate();

$articles->each(function ($article, $index) use ($response) {
    $article->update(['embedding' => $response->embeddings[$index]]);
});
This approach makes it very practical to set up Laravel pgvector on an existing dataset.
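Put together, the ArticleEmbeddingSeeder class might look like this. Chunking by 100 is an assumption chosen to stay under typical embedding-API batch limits; adjust it for your provider.

```
<?php

namespace Database\Seeders;

use App\Models\Article;
use Illuminate\Database\Seeder;
use Laravel\Ai\Embeddings;

class ArticleEmbeddingSeeder extends Seeder
{
    public function run(): void
    {
        Article::whereNull('embedding')
            ->chunkById(100, function ($articles) {
                // One API request per chunk of 100 articles
                $response = Embeddings::for($articles->pluck('body')->all())->generate();

                $articles->values()->each(function ($article, $index) use ($response) {
                    $article->update(['embedding' => $response->embeddings[$index]]);
                });
            });
    }
}
```

Run it with php artisan db:seed --class=ArticleEmbeddingSeeder.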
Step 4 — Run a Semantic Search
Now comes the exciting part. Once embeddings are stored, Laravel semantic search is just one query away.
Let's create another controller to handle the search.
php artisan make:controller SearchController
In this controller, use whereVectorSimilarTo() to search by meaning:
use App\Models\Article;
$results = Article::whereVectorSimilarTo('embedding', 'best wineries in Napa Valley')
    ->limit(10)
    ->get();
That's it. Under the hood, Laravel converts the search string into an embedding using the AI SDK, performs a cosine similarity search against the stored vectors, and returns the most semantically relevant results, even if they share zero exact words with the query.
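To make "cosine similarity" concrete, here is the metric in plain PHP. This is only an illustration of the math; in practice the comparison happens inside PostgreSQL via pgvector, not in application code.

```php
<?php

// Cosine similarity: the dot product of two vectors divided by the
// product of their magnitudes. 1.0 means same direction (very similar
// meaning), 0.0 means the vectors are unrelated.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Vectors pointing the same way score 1.0 regardless of their lengths
$score = cosineSimilarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]); // 1.0
```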
You can also set a minimum similarity threshold. This filters out weak matches:
$results = Article::whereVectorSimilarTo(
    column: 'embedding',
    value: 'best wineries in Napa Valley',
    minSimilarity: 0.7 // 0.0 to 1.0 — higher means stricter match
)
    ->limit(10)
    ->get();
A threshold of 0.7 works well for most Laravel vector search use cases. Adjust it based on your content and how strict you want the matching to be.
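Wired into the SearchController from earlier, the whole endpoint is only a few lines. The q request parameter and the JSON response are assumptions for the sake of a complete sketch:

```
<?php

namespace App\Http\Controllers;

use App\Models\Article;
use Illuminate\Http\Request;

class SearchController extends Controller
{
    public function __invoke(Request $request)
    {
        $results = Article::whereVectorSimilarTo(
            column: 'embedding',
            value: $request->input('q', ''),
            minSimilarity: 0.7
        )
            ->limit(10)
            ->get();

        return response()->json($results);
    }
}
```

Register it with something like Route::get('/search', SearchController::class).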
Bonus — Combine Full-Text Search with AI Reranking
Laravel semantic search and full-text search work great together. Use full-text search for speed. Then use AI reranking for semantic relevance.
$articles = Article::whereFullText('body', $request->input('query'))
    ->limit(50)
    ->get()
    ->rerank('body', $request->input('query'), limit: 10);
Here is what happens. First, PostgreSQL full-text search quickly retrieves 50 candidates. Then the AI reranker — powered by Cohere or Jina — scores each one by semantic relevance. Finally, it returns only the top 10 most meaningful results.
This gives you both speed and accuracy. It is the best approach for production Laravel semantic search features.
Real-World Use Cases
Laravel vector search is not just for blog articles. Here are some common use cases developers are building right now:
- Product search — customers find products by describing them in natural language instead of guessing exact keywords
- Documentation search — users find relevant docs even when they use different terminology
- Support ticket matching — automatically find similar past tickets when a new one is submitted
- Content recommendations — suggest related articles based on meaning, not just tags
Each of these was previously complex to build. With whereVectorSimilarTo(), each now takes under 30 minutes to set up.
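As a taste of the support-ticket use case, the same one-query pattern applies to any model with an embedding column. Ticket here is a hypothetical model, set up the same way as Article above:

```
use App\Models\Ticket;

// Find the five most semantically similar past tickets,
// excluding the ticket that was just submitted
$similar = Ticket::whereVectorSimilarTo('embedding', $newTicket->body)
    ->whereKeyNot($newTicket->id)
    ->limit(5)
    ->get();
```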
Final Thoughts
Laravel semantic search is one of the most impactful features in Laravel 13. It transforms how users interact with your application’s data. Moreover, the implementation is surprisingly simple. A migration, a model cast, an embedding step, and one query method. That’s the entire Laravel vector search setup from scratch. Additionally, because it runs directly on PostgreSQL with Laravel pgvector, you do not need any external services. Everything stays in your existing stack.
If you are building anything with a search feature in 2026, this is the upgrade worth making.