#111 — Generate LlmEngine “history” from dashboard and widget.
Repo: Twill-AI/facade State: closed | Status: done Assignee: Unassigned
Created: 2024-09-19 · Updated: 2025-03-24
Description
Separated from https://github.com/Twill-AI/facade/issues/106 (also see https://twill-network.slack.com/archives/D06UREE39J5/p1726736230263629 from private communication with Martin).
Blocked by some parts of https://github.com/Twill-AI/twill-llm-engine/issues/56
Context is expected to be provided to the answer_in_chat call via the "run_config" dictionary as new fields:
- tenant_info: str # this context is always expected
- widget_info: dict | None # this is expected only for widget chat
- dashboard_info: dict | None # this is expected only for dashboard chat
- origin: general_chat | widget_chat | dashboard_chat # type of place answer_in_chat is executed in
The tenant_info field is populated from tenant.llm_info as-is.
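A minimal sketch of how facade might assemble these run_config fields. The field names and allowed origin values come from this issue; the helper name build_run_config and the shape of the tenant object are assumptions, not the actual facade API:

```python
def build_run_config(tenant, origin, widget_info=None, dashboard_info=None):
    """Assemble the new run_config context fields (sketch, names assumed)."""
    if origin not in ("general_chat", "widget_chat", "dashboard_chat"):
        raise ValueError(f"unknown origin: {origin}")
    return {
        "tenant_info": tenant.llm_info,    # passed through as-is, always present
        "widget_info": widget_info,        # dict for widget chat, else None
        "dashboard_info": dashboard_info,  # dict for dashboard chat, else None
        "origin": origin,
    }
```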
The widget_info field contains the following fields (matching widget table fields):
- name
- type
- sql_query
- echarts_code (use "code" as the dict key)
- values_headers
- values_rows
- values_update_interval_sec
- created_at
- updated_at
The dashboard_info field contains:
- name
- description
- created_at
- updated_at
- widgets: list of widget_info objects that are part of the dashboard
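The two payloads above can be sketched as plain mapping functions. Field names follow the issue (including the echarts_code → "code" rename); the helper names and the assumption that rows arrive as dicts are hypothetical:

```python
def build_widget_info(widget: dict) -> dict:
    """Map a widget table row (as a dict) to the widget_info payload."""
    return {
        "name": widget["name"],
        "type": widget["type"],
        "sql_query": widget["sql_query"],
        "code": widget["echarts_code"],  # renamed to "code" per the issue
        "values_headers": widget["values_headers"],
        "values_rows": widget["values_rows"],
        "values_update_interval_sec": widget["values_update_interval_sec"],
        "created_at": widget["created_at"],
        "updated_at": widget["updated_at"],
    }

def build_dashboard_info(dashboard: dict, widgets: list) -> dict:
    """Map a dashboard row plus its widgets to the dashboard_info payload."""
    return {
        "name": dashboard["name"],
        "description": dashboard["description"],
        "created_at": dashboard["created_at"],
        "updated_at": dashboard["updated_at"],
        "widgets": [build_widget_info(w) for w in widgets],
    }
```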
AC:
- Facade can generate context about dashboards and widgets in a format compatible with LlmEngine, making it context-aware when responding to prompts in the relevant LLM chats.
- (unrelated, from a separate task) remove "tenant_role" from the items provided to the LlmEngine answer_in_chat call
Notes
Add implementation notes, blockers, and context here
Related
Add wikilinks to related people, meetings, or other tickets