
Note: 18

Exploring SQLite database search options in the LLM world

Note: 10

Overview: Comparing search options

Note: 472

Vector search vs keyword search vs LLM-generated SQL statements

| Aspect | Vector Search | Keyword Search (FTS5) | LLM-Generated SQL |
|---|---|---|---|
| Semantic understanding | ✅ Finds similar meaning even with different words | ❌ Exact/partial word matches only | ✅ Can expand synonyms (IoT, smart home, connected home) |
| Spelling tolerance | ✅ Handles typos/variations | ❌ Typos break matches | ⚠️ Depends on LLM interpretation |
| Speed | ⚠️ Slower (you saw the timeout!) | ✅ Very fast | ⚠️ LLM API latency (~1-2 sec) |
| Setup complexity | ⚠️ Embeddings needed upfront | ✅ Simple virtual table | ✅ Just a prompt |
| Cost | 💰 Embedding model (one-time) | ✅ Free | 💰💰 API call per query |
| Predictability | ✅ Deterministic | ✅ Deterministic | ❌ LLM can vary |
| Explainability | ⚠️ "Distance: 0.23" — not intuitive | ✅ Clear word matches | ✅ You see the SQL |
| Storage overhead | ⚠️ Large (384 floats per row) | ⚠️ Moderate (index) | ✅ None |
| Multilingual | ✅ Works across languages | ❌ Language-specific | ⚠️ Depends on prompt |

Note: 12

A spider diagram of the table above

Output: 96

Note: 16

Let's decide on the library and how to use it

Note: 583

The aim of this section of the dialogue is to explore SQLite vector search options. I've found a couple of libraries and would like to see how to use them:

https://github.com/asg017/sqlite-vec https://github.com/sqliteai/sqlite-vector

Here's a quick comparison of the two libraries:

| Aspect | sqlite-vec | sqlite-vector |
|---|---|---|
| Stars/Maturity | ~6.5k ⭐, 15 contributors, Mozilla-sponsored | ~438 ⭐, 5 contributors, by SQLite Cloud |
| License | MIT/Apache-2.0 (permissive) | Elastic License 2.0 (⚠️ restrictions for production/managed services) |
| Architecture | Uses vec0 virtual tables | Stores vectors as BLOBs in ordinary tables |
| Python install | pip install sqlite-vec | pip install sqliteai-vector |
| Vector types | float32, int8, binary | float32, float16, bfloat16, int8, uint8 |
| Distance metrics | L2 (cosine via normalization) | L2, L1, Cosine, Dot Product, Squared L2 |
| Indexing | Requires some setup for KNN queries | No pre-indexing required |
| Memory | Not specified | Claims ~30MB default |
| Keyword search | Not built-in (can combine with FTS5) | Not built-in |

Key observations:

  1. sqlite-vec is more mature, better funded, and has a permissive license — safer for commercial use
  2. sqlite-vector has a simpler "no virtual tables" approach and more distance metrics built-in, but the Elastic License may be problematic for production

Note: 376

SQLite Vector Search with sqlite-vec

Exploring hybrid search for customer support ticket retrieval — combining semantic vector search with keyword matching.

Key Concepts:

  • sqlite-vec uses vec0 virtual tables to store vector embeddings alongside regular SQLite tables
  • Vectors are stored separately and linked by id to the source data (tickets table)
  • Multiple vector columns supported per table (e.g., separate embeddings for title vs description)

Retrieval Options:

  • Vector search: L2 distance by default; normalize vectors before storing for cosine similarity
  • Keyword search: FTS5 with BM25 ranking for exact/prefix/phrase matching
  • Hybrid: Combine both — keywords for precision, vectors for handling typos, synonyms, and semantic similarity

Why Cosine Similarity for Tickets:

Ticket length varies significantly. Cosine similarity compares direction (meaning) rather than magnitude (length), making it ideal for matching short and long tickets with similar intent.

Normalization:

def normalize(vector):
    norm = np.linalg.norm(vector)
    return vector / norm if norm > 0 else vector

Normalize embeddings before insertion → L2 distance then behaves as cosine similarity.
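This works because, for unit vectors, the squared L2 distance is a monotonic function of cosine similarity: ||a − b||² = 2(1 − cos(a, b)), so ranking by either gives the same order. A quick numpy check (the vectors are made up):

```python
import numpy as np

def normalize(vector):
    # Scale to unit length; leave zero vectors untouched
    norm = np.linalg.norm(vector)
    return vector / norm if norm > 0 else vector

a = normalize(np.array([3.0, 4.0]))
b = normalize(np.array([1.0, 2.0]))

l2_sq = np.sum((a - b) ** 2)   # squared L2 distance
cos = np.dot(a, b)             # cosine similarity (vectors are unit length)
assert np.isclose(l2_sq, 2 * (1 - cos))
```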

Note: 7

Import customer service data

Note: 24

First we will import the data from Hugging Face into Polars and then into SQLite.

Note: 555

The chosen data is a synthetic customer support ticket dataset created by Tobias Bueck (from Softoft), designed for training AI models to handle IT helpdesk scenarios.

What it contains:

The dataset has approximately 61,800 tickets in English and German, featuring labeled customer emails with corresponding support agent responses.

Key fields in each ticket:

| Field | Description |
|---|---|
| subject | Email subject line |
| body | Customer's full email text |
| answer | Agent's response |
| type | Incident, Request, Problem, or Change |
| queue | Routing department (Technical Support, Billing, IT Support, etc.) |
| priority | low, medium, high |
| language | en or de |
| tag_1 to tag_8 | Classification tags like "Network", "Security", "Bug", etc. |

The 10 queues (departments): Technical Support, Customer Service, Billing and Payments, Product Support, IT Support, Returns and Exchanges, Sales and Pre-Sales, Human Resources, Service Outages and Maintenance, and General Inquiry.

Designed use cases: Text classification (routing tickets to correct departments), priority prediction (identifying urgent issues), and customer support analysis (understanding common problems).

Important notes:

  • This is synthetic data with no PII (personally identifiable information)
  • Licensed under CC-BY-NC-4.0 (non-commercial use)
  • Created by the maker of Open Ticket AI, an open-source helpdesk classification system

This is ideal for your vector search experiments because tickets have varied lengths, multiple classification labels, and both a query (body) and response (answer) you could embed separately.

Code: 9

!pip install polars 

Output: 117

Requirement already satisfied: polars in /app/data/.local/lib/python3.12/site-packages (1.36.1)
Requirement already satisfied: polars-runtime-32==1.36.1 in /app/data/.local/lib/python3.12/site-packages (from polars) (1.36.1)

Code: 7

import polars as pl

Code: 7

!pip install pyarrow

Output: 55

Requirement already satisfied: pyarrow in /app/data/.local/lib/python3.12/site-packages (22.0.0)

Code: 90

import polars as pl

# Login using e.g. `huggingface-cli login` to access this dataset
df = pl.read_csv('hf://datasets/Tobi-Bueck/customer-support-tickets/**/aa_dataset-tickets-multi-lang-5-2-50-version.csv')

Code: 1

df

Output: 2,829

shape: (28_587, 16)
subjectbodyanswertypequeueprioritylanguageversiontag_1tag_2tag_3tag_4tag_5tag_6tag_7tag_8
strstrstrstrstrstrstri64strstrstrstrstrstrstrstr
"Wesentlicher Sicherheitsvorfal…"Sehr geehrtes Support-Team,\n\…"Vielen Dank für die Meldung de…"Incident""Technical Support""high""de"51"Security""Outage""Disruption""Data Breach"nullnullnullnull
"Account Disruption""Dear Customer Support Team,\n\…"Thank you for reaching out, <n…"Incident""Technical Support""high""en"51"Account""Disruption""Outage""IT""Tech Support"nullnullnull
"Query About Smart Home System …"Dear Customer Support Team,\n\…"Thank you for your inquiry. Ou…"Request""Returns and Exchanges""medium""en"51"Product""Feature""Tech Support"nullnullnullnullnull
"Inquiry Regarding Invoice Deta…"Dear Customer Support Team,\n\…"We appreciate you reaching out…"Request""Billing and Payments""low""en"51"Billing""Payment""Account""Documentation""Feedback"nullnullnull
"Question About Marketing Agenc…"Dear Support Team,\n\nI hope t…"Thank you for your inquiry. Ou…"Problem""Sales and Pre-Sales""medium""en"51"Product""Feature""Feedback""Tech Support"nullnullnullnull
"Performance Problem with Data …"The data analytics tool experi…"We are addressing the performa…"Incident""Technical Support""high""en"400"Performance""IT""Tech Support"nullnullnullnullnull
"Datensperrung in der Kundschaf…"Es gab einen Datensperrungsunf…"Ich kann Ihnen bei dem Datensp…"Incident""Product Support""high""de"400"Security""IT""Tech Support""Bug"nullnullnullnull
"Problem mit der Videokonferenz…"Wichtigere Sitzungen wurden un…"Sehr geehrte/r [Name], leider …"Incident""Human Resources""low""de"400"Bug""Performance""Network""IT""Tech Support"nullnullnull
"Update Request for SaaS Platfo…"Requesting an update on the in…"Received your request for upda…"Change""IT Support""high""en"400"Feature""IT""Tech Support"nullnullnullnullnull
"Inquiry About Project Manageme…"Looking for detailed informati…"Dear [Name], thank you for you…"Request""Technical Support""low""en"400"Feature""Documentation""Feedback""IT"nullnullnullnull

Code: 13

import os
if os.path.exists("tickets.db"): os.remove("tickets.db")

Code: 18

df.write_database("tickets", "sqlite:///tickets.db")

Output: 18

28587

Code: 6

!ls tickets.db

Output: 19

tickets.db

Note: 10

Open the SQLite customer service database

Code: 27 ()

import sqlite3

db = sqlite3.connect("tickets.db")
cursor = db.cursor()

Code: 66 ()

# Fetch the first 2 tickets
results = cursor.execute("SELECT * FROM tickets LIMIT 2").fetchall()

# Get column names
columns = [desc[0] for desc in cursor.description]

print(columns)

Output: 256

['subject', 'body', 'answer', 'type', 'queue', 'priority', 'language', 'version', 'tag_1', 'tag_2', 'tag_3', 'tag_4', 'tag_5', 'tag_6', 'tag_7', 'tag_8']

Code: 13 ()

for row in results:
    print(row)

Output: 1,092

('Wesentlicher Sicherheitsvorfall', 'Sehr geehrtes Support-Team,\\n\\nich möchte einen gravierenden Sicherheitsvorfall melden, der gegenwärtig mehrere Komponenten unserer Infrastruktur betrifft. Betroffene Geräte umfassen Projektoren, Bildschirme und Speicherlösungen auf Cloud-Plattformen. Der Grund für die Annahme ist, dass der Vorfall eine potenzielle Datenverletzung im Zusammenhang mit einer Cyberattacke darstellt, was ein erhebliches Risiko für sensible Informationen und den laufenden Geschäftsbetrieb unserer Organisation bedeutet.\\n\\nUnsere initialen Untersuchungen haben ungewöhnliche Aktivitäten und Abweichungen bei den Geräten ergeben. Trotz der Umsetzung unserer standardisierten Behebungs- und Eindämmungsmaßnahmen konnte die Bedrohung bislang nicht vollständig eliminiert.', 'Vielen Dank für die Meldung des kritischen Sicherheitsvorfalls und die Bereitstellung der Übersicht über die betroffenen Geräte sowie der ergriffenen ersten Maßnahmen. Wir erkennen die Dringlichkeit und Schwere der Lage an und setzen alles daran, den Fall prioritär zu bearbeiten. Für eine umgehende Untersuchung benötigen wir zusätzliche Informationen: Bitte senden Sie uns spezifische Protokolle der betroffenen Projektoren, Bildschirme und Cloud-Speichersysteme, inklusive Zeitstempel verdächtiger Aktivitäten sowie ungewöhnlicher Fehlermeldungen. Falls möglich, fügen Sie auch eine Zusammenfassung der bereits durchgeführten Maßnahmen bei.', 'Incident', 'Technical Support', 'high', 'de', 51, 'Security', 'Outage', 'Disruption', 'Data Breach', None, None, None, None)
('Account Disruption', 'Dear Customer Support Team,\\n\\nI am writing to report a significant problem with the centralized account management portal, which currently appears to be offline. This outage is blocking access to account settings, leading to substantial inconvenience. I have attempted to log in multiple times using different browsers and devices, but the issue persists.\\n\\nCould you please provide an update on the outage status and an estimated time for resolution? Also, are there any alternative ways to access and manage my account during this downtime?', 'Thank you for reaching out, <name>. We are aware of the outage affecting the centralized account management system, and our technical team is actively working to resolve the issue. In the meantime, we suggest using alternative methods to manage your account, with a focus on restoring service as quickly as possible. We will provide an update as soon as the service is back online. We apologize for the inconvenience and appreciate your patience. If you have any further questions, please let us know.', 'Incident', 'Technical Support', 'high', 'en', 51, 'Account', 'Disruption', 'Outage', 'IT', 'Tech Support', None, None, None)

Note: 19

sqlite-vec library: create embedding vectors and virtual tables

Note: 73

To explore vector search I have used just the first 1,000 rows of the data for time's sake, working with the subject and body fields. In production I would also need a separate search over the answer field.

Note: 12 ()

install the library and run the imports

Code: 13

!pip install sqlite-vec sentence-transformers

Code: 33 ()

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')

Code: 22 ()

import sqlite_vec

db.enable_load_extension(True)
sqlite_vec.load(db)

Note: 30

Encode the subject and body columns into vectors and store them in a virtual table for later similarity searching

Code: 220 ()

def embed_tickets(db, model, limit=1000):
    "Generate and store normalized embeddings for tickets"
    tickets = db.execute("SELECT rowid, subject, body FROM tickets LIMIT ?", [limit]).fetchall()
    texts = [f"{subj} {body}" for rowid, subj, body in tickets]
    embeddings = model.encode(texts, show_progress_bar=True, normalize_embeddings=True)
    db.executemany("INSERT OR REPLACE INTO tickets_vec(rowid, embedding) VALUES (?, ?)",
                   [(t[0], e) for t, e in zip(tickets, embeddings)])
    db.commit()
    return len(tickets)

embed_tickets(db, model, 1000)

Output: 1,906


Batches: 100%|██████████| 32/32 [00:41<00:00,  1.31s/it]
1000

Note: 18 ()

print the vectors and columns data (subject and body)

Code: 163 ()

row = db.execute("""
    SELECT t.rowid, t.subject, t.body, vec_to_json(v.embedding)
    FROM tickets t
    JOIN tickets_vec v ON t.rowid = v.rowid
    LIMIT 1
""").fetchone()

print(f"rowid: {row[0]}")
print(f"subject: {row[1][:80]}...")
print(f"body: {row[2][:150]}...")
print(f"embedding (first 100 chars): {row[3][:100]}...")

Output: 177

rowid: 1
subject: Wesentlicher Sicherheitsvorfall...
body: Sehr geehrtes Support-Team,\n\nich möchte einen gravierenden Sicherheitsvorfall melden, der gegenwärtig mehrere Komponenten unserer Infrastruktur betr...
embedding (first 100 chars): [-0.000137,0.049616,-0.040818,-0.044171,0.033442,0.036788,-0.029094,0.036328,-0.002529,0.081678,0.01...

Note: 10 ()

Search for tickets using cosine similarity

Code: 186 ()

def search_tickets(db, model, query, k=5):
    "Find k most similar tickets to query using cosine similarity"
    q_emb = model.encode(query, normalize_embeddings=True)
    results = db.execute("""
        SELECT t.rowid, t.subject, t.priority, distance
        FROM tickets_vec v
        JOIN tickets t ON t.rowid = v.rowid
        WHERE embedding MATCH ? AND k = ?
        ORDER BY distance
    """, [q_emb, k]).fetchall()
    return results

search_tickets(db, model, "network connectivity issues", k=5)

Output: 300

[(855, 'Network Connectivity Difficulties', 'medium', 0.7621697783470154),
 (680,
  'Connectivity Problems with Network Equipment',
  'medium',
  0.7979362607002258),
 (327, 'Network Connectivity Concerns', 'high', 0.7988336682319641),
 (252,
  'Connectivity Problem with Network Equipment',
  'high',
  0.815132737159729),
 (331,
  'Network Connectivity Difficulties Impacting Medical Equipment',
  'medium',
  0.8262456059455872)]
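
The hybrid option mentioned earlier pairs these vector results with keyword matching. A minimal FTS5 sketch with BM25 ranking, using a tiny stand-in table (the real index would be built over the tickets table):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in; point at tickets.db for the real data
db.execute("CREATE VIRTUAL TABLE tickets_fts USING fts5(subject, body)")
db.executemany("INSERT INTO tickets_fts VALUES (?, ?)", [
    ("Network Connectivity Difficulties", "Office wifi drops every few minutes."),
    ("Invoice Question", "Please explain the charges on my last invoice."),
])

# BM25-ranked keyword match (bm25() is lower for better matches)
rows = db.execute("""
    SELECT rowid, subject, bm25(tickets_fts) AS score
    FROM tickets_fts WHERE tickets_fts MATCH 'network OR wifi'
    ORDER BY score
""").fetchall()
print(rows)
```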

Note: 16

Alternative LLM-generated SQL statement approach

Note: 88

In this approach we pass all the details of the table and its fields and ask the LLM to generate the SQL to query the database. This is flexible, as it can be used for any table and any field, including the body, subject, and answer fields.

Code: 19 ()

from lisette import *
from pprint import pprint
import sqlite3

Note: 7

The SQL prompts

Code: 19 ()

question = "I want to find information about smart home ticket"

Code: 385 ()

sql_prompt = f"""

# Role : 

You are a SQL query generator for a customer support ticket database.

Database information : 

Database: tickets.db
Table: tickets

Columns:
- subject (TEXT): Brief title of the ticket
- body (TEXT): Full description of the customer's issue
- answer (TEXT): The resolution/response provided
- type (TEXT): Ticket type (e.g., 'Incident', 'Request', 'Problem', 'Change')
- queue (TEXT): Department (e.g., 'Technical Support', 'Billing and Payments')
- priority (TEXT): 'low', 'medium', or 'high'
- language (TEXT): Language code (e.g., 'en', 'de')
- version (INTEGER): Version number
- tag_1 through tag_8 (TEXT): Category tags (may be NULL)

Action: 

- Given the user's question, generate a SQLite query to find relevant tickets.
- Use LIKE with wildcards for text searches. Return only the SQL query, nothing else.  
- The user might use different words for the same object or theme; please include alternative words so the search coverage is complete.
- Return only the raw SQL query with no markdown formatting.


User question: {question}

"""

Code: 3 ()

sql_prompt

Output: 561

"\n\n# Role : \n\nYou are a SQL query generator for a customer support ticket database.\n\nDatabase information : \n\nDatabase: tickets.db\nTable: tickets\n\nColumns:\n- subject (TEXT): Brief title of the ticket\n- body (TEXT): Full description of the customer's issue\n- answer (TEXT): The resolution/response provided\n- type (TEXT): Ticket type (e.g., 'Incident', 'Request', 'Problem', 'Change')\n- queue (TEXT): Department (e.g., 'Technical Support', 'Billing and Payments')\n- priority (TEXT): 'low', 'medium', or 'high'\n- language (TEXT): Language code (e.g., 'en', 'de')\n- version (INTEGER): Version number\n- tag_1 through tag_8 (TEXT): Category tags (may be NULL)\n\nAction: \n\n- Given the user's question, generate a SQLite query to find relevant tickets.\n- Use LIKE with wildcards for text searches. Return only the SQL query, nothing else.  \n- The user might not use different words for hte same object or theme, please use the alternative words so we have complete search items. \n- Return only the raw SQL query with no markdown formatting.\n\n\nUser question: I want to find information about smart home ticket\n\n"

Note: 16

Retrieving the SQL statement from the LLM

Code: 39 ()

model = 'claude-sonnet-4-5'

chat = Chat(model)
res = chat(sql_prompt)
display(res)

Output: 648

SELECT * FROM tickets WHERE subject LIKE '%smart home%' OR body LIKE '%smart home%' OR answer LIKE '%smart home%' OR subject LIKE '%home automation%' OR body LIKE '%home automation%' OR answer LIKE '%home automation%' OR subject LIKE '%IoT%' OR body LIKE '%IoT%' OR answer LIKE '%IoT%' OR subject LIKE '%connected home%' OR body LIKE '%connected home%' OR answer LIKE '%connected home%' OR tag_1 LIKE '%smart home%' OR tag_2 LIKE '%smart home%' OR tag_3 LIKE '%smart home%' OR tag_4 LIKE '%smart home%' OR tag_5 LIKE '%smart home%' OR tag_6 LIKE '%smart home%' OR tag_7 LIKE '%smart home%' OR tag_8 LIKE '%smart home%'

  • id: chatcmpl-bad4537e-a157-45ca-bd96-58cf0413a46f
  • model: claude-sonnet-4-5-20250929
  • finish_reason: stop
  • usage: Usage(completion_tokens=211, prompt_tokens=306, total_tokens=517, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0)

Code: 4 ()

res.json()

Output: 1,264

{'id': 'chatcmpl-bad4537e-a157-45ca-bd96-58cf0413a46f',
 'created': 1765661732,
 'model': 'claude-sonnet-4-5-20250929',
 'object': 'chat.completion',
 'system_fingerprint': None,
 'choices': [{'finish_reason': 'stop',
   'index': 0,
   'message': {'content': "SELECT * FROM tickets WHERE subject LIKE '%smart home%' OR body LIKE '%smart home%' OR answer LIKE '%smart home%' OR subject LIKE '%home automation%' OR body LIKE '%home automation%' OR answer LIKE '%home automation%' OR subject LIKE '%IoT%' OR body LIKE '%IoT%' OR answer LIKE '%IoT%' OR subject LIKE '%connected home%' OR body LIKE '%connected home%' OR answer LIKE '%connected home%' OR tag_1 LIKE '%smart home%' OR tag_2 LIKE '%smart home%' OR tag_3 LIKE '%smart home%' OR tag_4 LIKE '%smart home%' OR tag_5 LIKE '%smart home%' OR tag_6 LIKE '%smart home%' OR tag_7 LIKE '%smart home%' OR tag_8 LIKE '%smart home%'",
    'role': 'assistant',
    'tool_calls': None,
    'function_call': None,
    'provider_specific_fields': {'citations': None,
     'thinking_blocks': None}}}],
 'usage': {'completion_tokens': 211,
  'prompt_tokens': 306,
  'total_tokens': 517,
  'completion_tokens_details': None,
  'prompt_tokens_details': {'audio_tokens': None,
   'cached_tokens': 0,
   'text_tokens': None,
   'image_tokens': None,
   'cache_creation_tokens': 0,
   'cache_creation_token_details': {'ephemeral_5m_input_tokens': 0,
    'ephemeral_1h_input_tokens': 0}},
  'cache_creation_input_tokens': 0,
  'cache_read_input_tokens': 0}}

Code: 28 ()

sql_result = res.json()['choices'][0]['message']['content']
pprint(sql_result)

Output: 538

("SELECT * FROM tickets WHERE subject LIKE '%smart home%' OR body LIKE '%smart "
 "home%' OR answer LIKE '%smart home%' OR subject LIKE '%home automation%' OR "
 "body LIKE '%home automation%' OR answer LIKE '%home automation%' OR subject "
 "LIKE '%IoT%' OR body LIKE '%IoT%' OR answer LIKE '%IoT%' OR subject LIKE "
 "'%connected home%' OR body LIKE '%connected home%' OR answer LIKE "
 "'%connected home%' OR tag_1 LIKE '%smart home%' OR tag_2 LIKE '%smart home%' "
 "OR tag_3 LIKE '%smart home%' OR tag_4 LIKE '%smart home%' OR tag_5 LIKE "
 "'%smart home%' OR tag_6 LIKE '%smart home%' OR tag_7 LIKE '%smart home%' OR "
 "tag_8 LIKE '%smart home%'")

Note: 13

Running the SQL statement on the tickets database

Code: 21 ()

db = sqlite3.connect("tickets.db")
cursor = db.cursor()

Code: 57 ()

# Run the LLM-generated query
results = cursor.execute(sql_result).fetchall()

# Get column names
columns = [desc[0] for desc in cursor.description]

print(columns)

Output: 256

['subject', 'body', 'answer', 'type', 'queue', 'priority', 'language', 'version', 'tag_1', 'tag_2', 'tag_3', 'tag_4', 'tag_5', 'tag_6', 'tag_7', 'tag_8']

Note: 15

Length of the results of the LLM-generated query

Code: 4 ()

len(results)

Output: 16

125

Note: 10

show the first 3 results

Code: 6 ()

results[:3]

Output: 1,696

[('Query About Smart Home System Integration Features',
  'Dear Customer Support Team,\\n\\nI hope this message reaches you well. I am reaching out to request detailed information about the capabilities of your smart home integration products listed on your website. As a potential customer aiming to develop a seamlessly interconnected home environment, it is essential to understand how your products interact with various smart home platforms.\\n\\nCould you kindly provide detailed compatibility information with popular smart home ecosystems such as Amazon Alexa, Google Assistant, and Apple?',
  'Thank you for your inquiry. Our products support integration with Amazon Alexa, Google Assistant, and Apple HomeKit. Compatibility details can differ depending on the specific item; please let us know which models you are interested in. The setup process is generally user-friendly but may require professional installation. We regularly update our software to provide enhanced features. For comprehensive information on compatibility with upcoming updates, please specify the models you are considering.',
  'Request',
  'Returns and Exchanges',
  'medium',
  'en',
  51,
  'Product',
  'Feature',
  'Tech Support',
  None,
  None,
  None,
  None,
  None),
 ('Anfrage zu Sicherheitsvorkehrungen',
  'Sehr geehrter Kundendienst,\\n\\nich bitte um umfassende Details zu den Sicherheitsmaßnahmen bei EMR/PACS-Integrationen mit IoT-Medizinprodukten. Besonders wichtig sind mir die Einhaltung von Compliance-Standards, der Schutz der Privatsphäre der Patienten sowie der Datenschutz. Könnten Sie außerdem Einblicke in Ihre 24/7-Incident-Response-Protokolle geben? Ich möchte wissen, wie Ihr Team schnell und effizient auf potenzielle Sicherheitsgefahren reagiert, um die Zuverlässigkeit Ihrer Dienste einschätzen zu können.\\n\\nBitte fügen Sie auch Zertifizierungen und regulatorische Rahmenwerke bei, die Ihre Lösungen erfüllen, beispielsweise HIPAA oder GDPR. Eine Beschreibung Ihrer Überwachungssysteme und Reaktionsstrategien wäre ebenfalls sehr hilfreich.',
  'Vielen Dank für Ihre Anfrage. Unsere EMR/PACS-Integrationslösungen mit IoT-Medizinprodukten erfüllen alle relevanten Standards wie HIPAA, GDPR sowie ISO 27001. Wir setzen End-to-End-Verschlüsselung, sichere Zugriffskontrollen und eine kontinuierliche Netzwerküberwachung ein. Unser 24/7-Incident-Response-Team folgt klar definierten Eskalationsprotokollen, um Sicherheitsbedrohungen schnell zu erkennen und zu behandeln, wodurch Risiken minimiert werden. Detaillierte Protokolle der Vorfälle werden geführt, und im Falle von Verstößen werden unsere Kunden umgehend benachrichtigt. Falls Sie weitere Informationen oder Dokumentationen zu Zertifizierungen benötigen, teilen Sie uns bitte Ihre genauen Anforderungen mit, wir stellen diese gern bereit.',
  'Request',
  'Returns and Exchanges',
  'low',
  'de',
  51,
  'Security',
  'Compliance',
  'Data Privacy',
  'Product',
  'IT',
  'Tech Support',
  None,
  None),
 ('Alert for Unauthorized Entry',
  'Dear Customer Support,\\n\\nI have recently been notified of attempts at unauthorized entry into my IoT gadgets. This has raised significant worries about the security of my home network. I would be grateful for immediate help in examining this issue to ensure my devices are fully secured. Please advise on additional security protocols I can implement and keep me informed about any updates related to this problem. Thank you for your swift response to this urgent security matter.\\n\\nSincerely,\\n[Your Name]',
  "We appreciate your reaching out regarding the alert of unauthorized access to your IoT devices. Kindly specify the device models and provide the exact alert details you've received so we can proceed with a thorough investigation. In the meantime, we suggest changing your device passwords, updating their firmware, and enabling two-factor authentication where possible. Feel free to ask for step-by-step guidance on these security measures if needed.",
  'Problem',
  'Customer Service',
  'low',
  'en',
  51,
  'Security',
  'IT',
  'Tech Support',
  'Alert',
  None,
  None,
  None,
  None)]

Note: 10 ()

all rolled up into one function

Code: 559 ()

def llm_search(question, db_path="tickets.db", model='claude-sonnet-4-5'):
    sql_prompt = f"""
# Role : 
You are a SQL query generator for a customer support ticket database.

Database information : 
Database: {db_path}
Table: tickets

Columns:
- subject (TEXT): Brief title of the ticket
- body (TEXT): Full description of the customer's issue
- answer (TEXT): The resolution/response provided
- type (TEXT): Ticket type (e.g., 'Incident', 'Request', 'Problem', 'Change')
- queue (TEXT): Department (e.g., 'Technical Support', 'Billing and Payments')
- priority (TEXT): 'low', 'medium', or 'high'
- language (TEXT): Language code (e.g., 'en', 'de')
- version (INTEGER): Version number
- tag_1 through tag_8 (TEXT): Category tags (may be NULL)

Action: 
- Given the user's question, generate a SQLite query to find relevant tickets.
- Use LIKE with wildcards for text searches. Return only the SQL query, nothing else.  
- The user might use different words for the same object or theme; please include alternative words so the search coverage is complete.
- Return only the raw SQL query with no markdown formatting.

User question: {question}
"""
    chat = Chat(model)
    res = chat(sql_prompt)
    sql_query = res.json()['choices'][0]['message']['content']
    
    db = sqlite3.connect(db_path)
    cursor = db.cursor()
    results = cursor.execute(sql_query).fetchall()
    columns = [desc[0] for desc in cursor.description]
    db.close()
    
    return {'columns': columns, 'results': results, 'sql': sql_query}
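
Executing model output directly against the database is risky; a small hypothetical guard (check_select_only is not part of any library) could be applied before cursor.execute:

```python
def check_select_only(sql):
    "Hypothetical guard: allow only a single SELECT statement"
    stripped = sql.strip().rstrip(";").strip()
    if not stripped.lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    if ";" in stripped:  # conservative: also rejects literals containing ';'
        raise ValueError("multiple statements are not allowed")
    return stripped

check_select_only("SELECT * FROM tickets WHERE priority = 'high'")  # passes
```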

Note: 7

basic search no priority

Code: 15 ()

r = llm_search("smart home ticket")

Code: 19 ()

print(f"Found {len(r['results'])} tickets")

Output: 22

Found 125 tickets

Code: 16 ()

print(f"SQL: {r['sql']}")

Output: 276

SQL: SELECT * FROM tickets WHERE subject LIKE '%smart home%' OR body LIKE '%smart home%' OR answer LIKE '%smart home%' OR subject LIKE '%home automation%' OR body LIKE '%home automation%' OR answer LIKE '%home automation%' OR subject LIKE '%IoT%' OR body LIKE '%IoT%' OR answer LIKE '%IoT%' OR subject LIKE '%connected home%' OR body LIKE '%connected home%' OR answer LIKE '%connected home%'

Note: 9

basic search with high priority

Code: 21 ()

r = llm_search("smart home ticket with a high priority")

Code: 19 ()

print(f"Found {len(r['results'])} tickets")

Output: 22

Found 48 tickets

Code: 16 ()

print(f"SQL: {r['sql']}")

Output: 211

SQL: SELECT * FROM tickets WHERE (subject LIKE '%smart home%' OR body LIKE '%smart home%' OR subject LIKE '%home automation%' OR body LIKE '%home automation%' OR subject LIKE '%IoT%' OR body LIKE '%IoT%' OR subject LIKE '%connected home%' OR body LIKE '%connected home%') AND priority = 'high'

Prompt: 31

ok, after all this, what could similarity search do that an LLM-generated SQL search could not?

Assistant: 625

Looking at your exploration, here are the key advantages of similarity search over LLM-generated SQL:

1. Semantic understanding without exact terms

Similarity search finds conceptually related tickets even when they use completely different vocabulary:

  • Query: "my wifi keeps dropping" → finds tickets about "intermittent network connectivity", "unstable connection", "wireless disruption"
  • The LLM SQL approach requires the LLM to predict synonyms upfront, which it can miss

2. Handles typos and variations gracefully

Vector embeddings are robust to:

  • Misspellings: "conection problm" still finds connectivity issues
  • Mixed languages in the same query
  • Informal vs formal language (user says "my internet is borked" → finds "network service degradation")

3. Cross-lingual semantic search

Your dataset has both English and German tickets. Similarity search can find semantically similar tickets across languages without translation:

  • Query in English: "billing error" → can surface relevant German "Rechnungsfehler" tickets
  • SQL LIKE searches are language-locked

4. Contextual ranking by relevance

Vector search returns a distance score that reflects semantic similarity. SQL with LIKE either matches or doesn't - no nuanced ranking of "how relevant" each result is unless you build complex scoring logic.

5. Handles vague or exploratory queries

When users don't know exact keywords:

  • "Something's wrong with my account but I can't log in" → similarity search understands the authentication + account access context
  • LLM-generated SQL needs to guess the right LIKE patterns, which becomes a long OR chain

When LLM-generated SQL is better:

  • Structured filters: priority, date ranges, specific queues, ticket types
  • Exact phrase matching
  • Complex boolean logic combining multiple fields
  • No embedding overhead or search latency

The hybrid approach (which you'd likely want in production) uses similarity search for the semantic "what is this about" part, then filters with SQL for structured criteria like dates, priority, or department.
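
One common way to implement that combination is reciprocal rank fusion, which merges the two ranked lists without needing comparable scores. A minimal sketch with hypothetical result lists (the rowids are illustrative):

```python
def rrf(rankings, k=60):
    # Reciprocal rank fusion: score(id) = sum over lists of 1 / (k + rank)
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = [855, 680, 327]    # e.g. rowids from the vector search
keyword_hits = [327, 855, 999]   # e.g. rowids from an FTS5 query
print(rrf([vector_hits, keyword_hits]))  # → [855, 327, 680, 999]
```

Documents appearing high in both lists (855, 327) float to the top, while single-list hits trail behind.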