eazyBI assistants' usage of external LLM services
This page describes in more detail how eazyBI assistants use external LLM services and what customer data are sent to LLM services.
LLM providers
eazyBI uses Google Cloud Platform (GCP) Vertex AI platform to access Large Language Model (LLM) services. eazyBI uses LLM services in europe-west1 (Belgium) and us-central1 (USA) regions. eazyBI currently uses the following LLM services:
- Anthropic Claude Sonnet and Haiku models
- Google text embedding models
eazyBI uses paid LLM service subscriptions with request rate limits high enough to provide sufficient capacity for customer requests.
eazyBI might use other leading LLMs (e.g. OpenAI or Google Gemini) in the future if they provide better results for eazyBI assistants’ needs.
LLM system prompts
eazyBI uses prompt templates for each type of assistant (report builder, calculated member MDX formula, custom field JavaScript code) to build the system prompt for LLM models. Prompt templates contain the following components:
- Static common instructions for each type of assistant that do not contain any customer-specific data.
- User prompt-specific context data that is retrieved using vector similarity search from the following sources (which do not contain any customer-specific data):
- Fragments of public eazyBI documentation pages from https://docs.eazybi.com
- Examples of eazyBI report definitions and calculated member MDX formulas.
- Customer-specific data cube metadata and custom field metadata.
Customer data cube metadata contains the following information:
- List of dimensions and hierarchies, for example:
  [Project], [Project.Category], [Reporter], [Assignee], [Issue Type], [Issue Type.By type], [Issue Type.By name], [Priority], [Status], [Status.Category], ...
- List of levels in these dimensions, for example:
  [Project].[Project], [Project].[Component], [Project.Category].[Category], [Project.Category].[Project], [Project.Category].[Component], [Reporter].[User], [Assignee].[User], [Issue Type].[Issue Type], [Issue Type.By type].[Type], [Issue Type.By type].[Issue Type], ...
- List of measures, for example:
  [Measures].[Issues created], [Measures].[Issues due], [Measures].[Issues resolved], [Measures].[Issues closed], [Measures].[Issues with due date], ...
- List of calculated members, for example:
  [Measures].[Open issues], [Measures].[Average resolution days], [Measures].[Average resolution workdays], [Measures].[Average closing days], [Measures].[Average age days], [Measures].[Average age workdays], ...
Metadata do not include:
- List of individual dimension members.
- Values of the measures.
- Formulas of the measures and calculated members.
For the custom field assistant, the following metadata are sent:
- The current custom field display name, the internal name with custom field ID, and the data type.
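The prompt assembly described above can be sketched as follows. This is a minimal illustration, not eazyBI's actual implementation: the toy two-dimensional "embeddings", the document store, and the function names are all hypothetical stand-ins for a real text-embedding model and vector index.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_context(prompt_embedding, documents, top_k=2):
    """Return the top_k document fragments most similar to the user prompt."""
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(prompt_embedding, d["embedding"]),
        reverse=True,
    )
    return [d["text"] for d in ranked[:top_k]]

def build_system_prompt(static_instructions, context_fragments, cube_metadata):
    """Combine the three prompt-template components into one system prompt."""
    return "\n\n".join([static_instructions, "\n".join(context_fragments), cube_metadata])

# Toy example: 2-dimensional vectors standing in for real embeddings
docs = [
    {"text": "Docs fragment about calculated members", "embedding": [0.9, 0.1]},
    {"text": "Report definition example", "embedding": [0.2, 0.8]},
    {"text": "Docs fragment about importing data", "embedding": [0.5, 0.5]},
]
fragments = retrieve_context([1.0, 0.0], docs, top_k=1)
prompt = build_system_prompt(
    "You are a report builder assistant.",
    fragments,
    "Measures: [Measures].[Issues created], ...",
)
```

The key point the sketch shows is that only the retrieved fragments (documentation and examples that contain no customer data) plus the cube metadata reach the LLM, never the underlying cube values.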
LLM user prompts
User-entered messages are sent to LLMs as the user prompt. In addition, users can include in the message:
- The current report definition, which contains the metadata of dimensions and measures on report columns, rows, and pages, selected dimension members, table or chart display options, and report-specific calculation formulas (the same report definition in the JSON format that can be exported from the UI).
- The current calculated member MDX formula.
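Composing such a user prompt can be sketched like this. The attachment labels and the shape of the report definition are illustrative assumptions, not eazyBI's actual wire format:

```python
import json

def build_user_prompt(message, report_definition=None, mdx_formula=None):
    """Compose the user prompt from the user message plus optional attachments.

    The section labels below are hypothetical; the real format is internal
    to eazyBI.
    """
    parts = [message]
    if report_definition is not None:
        parts.append("Current report definition:\n" + json.dumps(report_definition, indent=2))
    if mdx_formula is not None:
        parts.append("Current calculated member formula:\n" + mdx_formula)
    return "\n\n".join(parts)

prompt = build_user_prompt(
    "Show open issues by project",
    report_definition={
        "columns": ["[Measures].[Open issues]"],
        "rows": ["[Project].[Project]"],
    },
)
```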
LLM function calls
eazyBI assistants provide several functions (sometimes called tools) to LLMs that can be used to improve the quality and relevance of LLM responses. These functions retrieve customer-specific data from the current eazyBI account data cube and include the results in the LLM prompt. Only the functions relevant to the current user prompt are called, with arguments specific to that prompt.
- lookup_members
  LLMs can use this function to look up the exact dimension member name. For example, it can look up “bug” in the Issue Type dimension to find the full member name [Issue Type].[Bug], or look up a project by the project key “ITS” to find the full member name [Project].[IT Services].
- level_members
  If lookup_members cannot find the necessary dimension member, the level_members function might be used to return all level members, and the LLM will try to identify the member that the user intended. For example, if lookup_members did not find “feature” in the Issue Type dimension, level_members might be used and return the “Bug”, “Epic”, and “Story” level member names. This function can be used only for small dimension levels that return a limited set of member names.
- dimension_properties
  When writing MDX formulas, this function might be called to list all properties of a dimension that can be used in an MDX formula. For example, for the Issue dimension it will return the list of all standard properties (e.g. “Created at”, “Updated at”, “Issue type ID”, …) and the list of imported custom field names. This function returns only property / field names, not their values.
- validate_mdx_formula, test_mdx_formula
  If the user asks to fix an invalid MDX formula, validate_mdx_formula will be used to validate its MDX syntax, and test_mdx_formula might be used to check whether the formula returns any result.
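Functions like these are registered with the model as tool definitions. The sketch below shows what a lookup_members definition could look like in the JSON Schema format used by Claude's tool-use API; the parameter names are assumptions, since the schema eazyBI actually registers is not public:

```python
# Illustrative tool definition for lookup_members in the JSON Schema
# format expected by Claude's tool-use API. The parameter names
# ("dimension", "search_term") are assumptions for illustration only.
lookup_members_tool = {
    "name": "lookup_members",
    "description": "Look up the exact full member name in a dimension.",
    "input_schema": {
        "type": "object",
        "properties": {
            "dimension": {
                "type": "string",
                "description": "Dimension name, e.g. 'Issue Type'",
            },
            "search_term": {
                "type": "string",
                "description": "Member name or key to look up, e.g. 'bug' or 'ITS'",
            },
        },
        "required": ["dimension", "search_term"],
    },
}
```

The model then requests a call such as `lookup_members(dimension="Issue Type", search_term="bug")`, and the application returns the result (e.g. `[Issue Type].[Bug]`) as a tool-result message in the conversation.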
For the Jira custom field JavaScript code assistant, the following additional functions might be used:
- search_custom_field
  Find an available Jira custom field by its display name and return its ID, internal name, and data type.
- search_jira_issues
  Find sample Jira issues (up to 10) using the provided JQL query and return the values of the requested fields. The issue search is performed with the current user's permissions. Only the standard or custom fields relevant to the current user prompt are requested, to find sample data for correctly generating the requested JavaScript code that uses these fields.
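When the model requests one of these function calls, the application dispatches it to a handler and feeds the result back into the conversation. A minimal dispatch sketch, with stub handlers standing in for the real cube and Jira lookups (all names and return shapes here are hypothetical):

```python
def handle_tool_call(name, arguments, handlers):
    """Dispatch a model-requested function call to its handler and
    return the result to include back in the LLM conversation."""
    if name not in handlers:
        return {"error": f"unknown function: {name}"}
    return handlers[name](**arguments)

# Stub handlers standing in for real cube / Jira lookups
handlers = {
    "lookup_members": lambda dimension, search_term: {
        "member": f"[{dimension}].[{search_term.title()}]"
    },
    "search_custom_field": lambda display_name: {
        "id": 10100, "name": "customfield_10100", "type": "number"
    },
}

result = handle_tool_call(
    "lookup_members",
    {"dimension": "Issue Type", "search_term": "bug"},
    handlers,
)
```

A real implementation would also enforce the current user's permissions inside each handler, as the document notes for search_jira_issues.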
LLM usage of customer-specific data
The customer-specific data described above are used only for the current user conversation thread. Customer-specific metadata and function results are not shared between different user conversation threads.
The current LLM services DO NOT use customer conversation inputs or outputs to train LLM models (see Anthropic privacy policy). In the future, only those LLM providers (like Anthropic, Google Gemini, or OpenAI) will be used that do not use customer inputs or outputs for LLM model training.
Network communication with LLM services
eazyBI communicates with LLM services using TLS encryption and uses standard Google Cloud authentication to authenticate access to Vertex AI services.
Remote assistants (from Jira or Confluence DC apps or from Private eazyBI) use TLS-encrypted communication to access eazybi.com assistants services. JWT token authentication is used to authenticate remote assistants using the remote instance-specific shared encryption key.
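The JWT authentication with an instance-specific shared key can be sketched with a minimal HS256 sign/verify pair. This is a standard-library illustration of the mechanism, not eazyBI's actual code; the claim names and key value are made up:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, shared_key: bytes) -> str:
    """Produce an HS256-signed JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(shared_key, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

def verify_jwt(token: str, shared_key: bytes) -> bool:
    """Check the token signature against the shared key."""
    header, body, signature = token.split(".")
    expected = b64url(hmac.new(shared_key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

# Hypothetical remote instance signing a request to eazybi.com
token = sign_jwt({"iss": "remote-instance-id"}, b"instance-specific-shared-key")
```

Because the key is shared only between one remote instance and eazybi.com, a valid signature proves which instance the request came from; in production a library such as PyJWT (which also validates expiry and claims) would typically be used instead of hand-rolled code.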
Assistant request rate limits
eazyBI uses assistant request rate limits that are sufficient for reasonable use of AI assistants but prevent potential high-volume misuse for unintended use cases.
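A common way to implement such limits is a token bucket, which allows short bursts while capping the sustained request rate. A minimal sketch (the rate and capacity values are illustrative, not eazyBI's actual limits):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity` requests
    while sustaining at most `rate` requests per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical limit: burst of 3 requests, then 1 request per second
bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
# the first 3 burst requests pass; the remaining 2 are throttled
```

Legitimate interactive use rarely exceeds the burst size, while a high-volume script quickly exhausts the bucket and is rejected until tokens refill.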