twill-llm-engine
Repo: Twill-AI/twill-llm-engine
Language: Jupyter Notebook
Description: Twill AI LLM Agents
Twill LLM engine
Main LLM routines that are the basis of the analytics platform.
Tech stack 💻📚⚙️
- LangChain: Prompts, Models, Tools
- LangGraph: Agents
- LangSmith: Tracing
- spaCy: General NLP (hardcoded embeddings, POS tagging)
- pandas: Data wrangling
- Azure OpenAI: LLM provider
LLM engine interface 🛠️💬🌟
The answer_in_chat method is the main entry point for interacting with the LLM engine. It returns
an AsyncIterator that streams events as the response is being generated. For the available
event types, check interface.py. This method covers chat-based interactions only;
task-specific methods for other functionality will be added in the future.
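A minimal sketch of how a caller might consume that stream. The ChatEvent class, its event type strings, and the stubbed answer_in_chat below are illustrative stand-ins, not the real interface (the actual event classes live in interface.py):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator

# Hypothetical event type; the real event classes are defined in interface.py.
@dataclass
class ChatEvent:
    type: str   # e.g. "token" for a streamed chunk, "final" for the full answer
    data: str

# Stub standing in for the engine's answer_in_chat, which streams events
# while the response is generated.
async def answer_in_chat(message: str) -> AsyncIterator[ChatEvent]:
    for chunk in ("Hello", ", ", "world"):
        yield ChatEvent("token", chunk)
    yield ChatEvent("final", "Hello, world")

async def consume(message: str) -> str:
    # Collect streamed token events into the full response text.
    parts: list[str] = []
    async for event in answer_in_chat(message):
        if event.type == "token":
            parts.append(event.data)
    return "".join(parts)

result = asyncio.run(consume("hi"))
print(result)  # Hello, world
```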
LLM LangGraph structure 🧩📐🖼️
Calling the answer_in_chat method starts execution at the __start__ node of the
MAIN AGENT graph. The first step is an initial check that routes conversations to task-specific agents.
A message sent from a general chat or a dashboard chat takes the general route; a message
sent from a widget chat takes the widget route. Both routes follow a ReAct architecture,
but they differ in the set of tools available to them. The general route also generates a summary
on a first interaction (no previous history).
After either agent finishes, the conversation goes through post-processing steps and user suggestions are generated.
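The routing logic above can be sketched as a small helper. The origin labels and return shape here are assumptions for illustration; the real routing is a node inside the MAIN AGENT graph:

```python
# Illustrative routing helper mirroring the initial check described above.
# The origin labels ("general", "dashboard", "widget") and the returned dict
# are assumptions; the actual node lives in the MAIN AGENT graph definition.
def route_conversation(chat_origin: str, history: list) -> dict:
    if chat_origin == "widget":
        route = "widget"
    else:  # general chat or dashboard chat
        route = "general"
    return {
        "route": route,
        # The general route computes a summary only on a first interaction
        # (i.e. when there is no previous history).
        "needs_summary": route == "general" and not history,
    }

print(route_conversation("dashboard", []))
print(route_conversation("widget", [{"role": "user", "content": "hi"}]))
```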
Main Agent 🤖📈📋

Step descriptions
- check_first_interaction: set some initial configs and route to specific agents
- assistant: ReAct-type agent that can call tools iteratively or respond with text
- tool: a node to execute the tools (description of tools below)
- generate_first_interaction_summary: if called from a general query and it is the first interaction, generate a summary
- generate_questions: generate suggestions of what the user could ask next
- post_process_conversation: shorten and summarize the conversation so far if it is too long
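The steps above can be simulated in plain Python to show the control flow. The real wiring uses LangGraph; the state keys and the conversation-length threshold here are assumptions for illustration:

```python
# Plain-Python sketch of the node sequence above. The real graph is built with
# LangGraph; node names match the step descriptions, behaviors are toy stand-ins.
def run_main_agent(state: dict) -> list[str]:
    visited = ["check_first_interaction"]
    # assistant/tool loop: the ReAct agent may call tools iteratively.
    while state.get("pending_tool_calls", 0) > 0:
        visited += ["assistant", "tool"]
        state["pending_tool_calls"] -= 1
    visited.append("assistant")  # final text response
    if state.get("route") == "general" and state.get("first_interaction"):
        visited.append("generate_first_interaction_summary")
    visited.append("generate_questions")
    if state.get("conversation_length", 0) > 20:  # assumed threshold
        visited.append("post_process_conversation")
    return visited

print(run_main_agent({"route": "general", "first_interaction": True,
                      "pending_tool_calls": 1}))
```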
Tools 🛠️🔍🗃️
The general agent uses:
- SQL agent
- Plotting agent
- KPI tool
The widget agent uses:
- For chats opened from a Plot:
  - SQL agent
  - Widget modification tool
  - Plotting agent
- For chats opened from a KPI tile:
  - SQL agent
  - KPI tool
SQL Agent (used as tool) 🛠️
This tool is in fact an agent itself. When called, it allows the LLM to query the database.
Steps:
- preprocess_nl_query: run a long series of regexes to detect user-defined entities (like customers or vendors). Since it doesn't scale well with string length, it is skipped for long queries.
- get_relevant_tables_presql: uses the PRESQL techn
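A toy version of the entity-detection pass, assuming one regex per known entity name. The function signature, the entity dictionary shape, and the length cutoff are all assumptions; the actual limit and regexes are not documented here:

```python
import re

MAX_QUERY_CHARS = 2_000  # assumed cutoff; the real limit is not documented

def preprocess_nl_query(query: str, entities: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Toy sketch of the entity-detection pass: scan the query with one regex
    per known entity name. Matching cost grows with query length, so long
    queries are skipped, as the step description notes."""
    if len(query) > MAX_QUERY_CHARS:
        return []
    hits = []
    for kind, names in entities.items():
        for name in names:
            if re.search(rf"\b{re.escape(name)}\b", query, re.IGNORECASE):
                hits.append((kind, name))
    return hits

entities = {"customer": ["Acme Corp"], "vendor": ["Globex"]}
print(preprocess_nl_query("revenue from Acme Corp last month", entities))
```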
Recent Open Issues
- #327 [BUG] Wrong fields extracted from application
- #307 Add new KPI table descriptions to make it available to LLM
- #306 Update LLM generation of flexible widgets to use the Jinja rendering from python shared.
- #304 Improve LLM SQL generation context by using a richer query on pg_stats
- #230 Create widget template repository for analytics
- #308 chat should recognize successful transactions as sales
- #296 Review SQL related error logging (non-syntax errors) in LLM
- #300 Include Commerce Customers and Payments’ Canonical Customers as an entity that gets parsed in LLM prompts.
- #299 LLM has context of how many integrations are connected and uses it.
- #229 Modify LLM free-form widget creation route to support filters
Patterns & Notes
Related
- Patterns — Cross-cutting code patterns
- Debugging-Guide — Known bugs and fixes