eazyBI Assistants FAQ
Overview
This page and our Privacy Policy describe how eazyBI handles your data when you interact with assistants embedded in the eazyBI app.
eazyBI assistants are a new experimental feature that is constantly improved based on usage statistics and user feedback.
How do eazyBI assistants work?
eazyBI assistants combine leading LLMs (large language models) with data from eazyBI documentation and from sample data cubes and reports to answer questions about eazyBI, build new reports, and write MDX formulas or custom field JavaScript code.
When receiving a user’s message, the assistant finds semantically similar documentation fragments or sample reports and includes these examples in the standard prompt instructions for the LLM. Depending on the context of the user messages, the assistant’s response might contain example report definitions, sample MDX or JavaScript code, or documentation fragments and links.
When continuing the conversation in the same thread, all previous messages will be included in the context for the next response. It is recommended to start a new topic (using the + button) when the previous messages are not related to the new message.
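Conceptually, the flow can be pictured with the following simplified sketch. All function and variable names in it (findSimilarExamples, formatExamples, callLlm, STANDARD_INSTRUCTIONS) are hypothetical illustrations and do not describe the actual eazyBI implementation.

```javascript
// Hypothetical sketch of how an assistant response could be assembled.
// The helper functions below are illustrative only.
async function answerUserMessage(thread, userMessage) {
  // 1. Find documentation fragments and sample reports that are
  //    semantically similar to the user's message.
  const examples = await findSimilarExamples(userMessage);

  // 2. Combine the standard prompt instructions, the retrieved examples,
  //    and all previous messages from the same conversation thread.
  const prompt = [
    { role: "system", content: STANDARD_INSTRUCTIONS + formatExamples(examples) },
    ...thread.previousMessages,              // earlier messages in this thread
    { role: "user", content: userMessage }
  ];

  // 3. Send the assembled prompt to the external LLM and return its response,
  //    which may contain report definitions, MDX or JavaScript code,
  //    or documentation fragments and links.
  return await callLlm(prompt);
}
```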
What LLMs are used?
eazyBI assistants currently use the leading Claude 3.5 LLM models by Anthropic and the Gemini 1.5 Pro model, both provided through the Google Cloud Vertex AI API. This means that eazyBI uses the same Google Cloud Platform services where eazyBI Cloud services are deployed.
eazyBI is testing and evaluating other LLMs (e.g., GPT-4o by OpenAI) and might use other models in the future if they provide better results for eazyBI assistants' needs.
How should I use eazyBI assistants?
Assistants draw on a large body of knowledge about eazyBI and can quickly suggest sample reports, MDX formulas, or JavaScript code examples that would otherwise be harder to find. However, LLMs generate responses based on user inputs and provided instructions and are probabilistic in nature, so they can make mistakes and produce responses that are inaccurate, incomplete, or unreliable. Always test generated reports, MDX formulas, or JavaScript code and validate that they work as expected. Contact eazyBI support if the provided solutions do not meet your needs and you need additional help.
What data are sent to external LLM services?
User messages, along with standard instructions, documentation examples, sample report examples, and metadata about cube measures and dimensions, are sent to external LLM services.
Imported data (from source applications) are not sent to external LLM services, except when explicitly requested by users.
External LLM services do not store received user messages and do not use user messages to train their models.
What data are stored in eazyBI?
eazyBI stores recent assistant conversation messages in its database. eazyBI support employees might review assistant conversations to evaluate the provided responses and improve the assistants with better instructions, documentation, and sample report examples.
eazyBI does not provide assistant conversation data to any third parties except the LLM services described above. Assistant conversation data are not shared between different customer sites.
Do not enter confidential information in assistant conversations or any data you would not want an eazyBI reviewer to see.
How to provide feedback?
Use the thumbs-up and thumbs-down buttons to rate assistant responses, and add optional comments describing whether the assistant's response was helpful or not. eazyBI support employees will review this feedback and use it to improve assistant instructions. You can also contact eazyBI support if you would like to provide more detailed feedback and receive a response.