AI Model Builder FAQs

Go Up to Generating a Model Using AI

What is the AI Model Builder feature?

The AI Model Builder lets you kick-start the modeling process by transforming a natural-language description into a first-cut logical data model. It helps new users learn data modeling and ER/Studio, and serves as a productivity aid for experienced users.

The feature is off by default; you must accept the associated terms and conditions before you can access it.

Use the following link to access a demo video of this feature: ER/Studio's AI Data Modeling Tool

What Terms and Conditions apply to this feature?

The Terms and Conditions for using this feature can be downloaded from this link.

Which LLM is used for this feature?

We currently use OpenAI's GPT-4.1, but this may change at any time.

Are the prompts stored and used to train the LLM?

No. As part of our contract with the AI provider, prompts are not used to train the AI. The AI provider does not retain any of your data. ER/Studio does store logs of all prompts and responses on our servers to investigate issues and improve the quality of the service.

Can we restrict use of this feature?

The feature in Data Architect connects to an Idera service at the address aida-api.idera.com. Customers can block this address on their company firewall, which prevents all users from accessing the service.
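
If you want to confirm that the block is in place, a simple connectivity check against the endpoint from a client workstation is enough. The following is a minimal, hypothetical Python sketch; it assumes the service is reached directly over HTTPS on port 443, which may not hold if your network uses a proxy.

  import socket

  # Hypothetical check: try to open a TCP connection to the AI service endpoint.
  # If the firewall block is in place, this should fail with a timeout or a
  # connection error. Port 443 (HTTPS) is an assumption, not a documented detail.
  HOST = "aida-api.idera.com"
  PORT = 443

  try:
      with socket.create_connection((HOST, PORT), timeout=5):
          print(f"{HOST}:{PORT} is reachable - the service is NOT blocked.")
  except OSError as error:
      print(f"{HOST}:{PORT} is unreachable - the block appears to be effective ({error}).")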

There is also a license switch: if you need the feature switched off for your users, contact support, who can issue a new license with the feature removed.

Where is the model hosted? Whose tenant and which country?

We currently use the OpenAI service, so the models run on OpenAI servers.

The OpenAI servers are currently located in the USA. For more information about the location, see Infrastructure.

What data is stored on Idera servers from each transaction?

We keep a log on Idera servers, containing the following data:

  • A random unique identifier for the user
  • Prompt
  • Response
  • Timestamp
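
For illustration only, the sketch below shows one way a log record with these fields could be represented. The field names and types are hypothetical; only the four items listed above are documented.

  from dataclasses import dataclass
  from datetime import datetime, timezone
  import uuid

  # Hypothetical shape of one log entry; field names are illustrative only.
  @dataclass
  class PromptLogEntry:
      user_id: str         # random unique identifier for the user
      prompt: str          # prompt text sent to the service
      response: str        # response returned to ER/Studio
      timestamp: datetime  # time of the transaction

  # Example record (all values are made up):
  entry = PromptLogEntry(
      user_id=str(uuid.uuid4()),
      prompt="Create a logical model for a library lending system.",
      response="(generated model description)",
      timestamp=datetime.now(timezone.utc),
  )
  print(entry)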

We use this data to improve the service and find faults. It is not used for any other purpose and is not made available to any third parties.

Where does the AI get its knowledge to provide responses?

Knowledge comes from our proprietary prompt instructions and the AI provider's LLM, currently OpenAI GPT-4.1.

Please refer to OpenAI's documentation on training data for GPT-4.1.

What data is stored on the OpenAI servers from each transaction?

Data sent to the OpenAI API is not used to train or improve OpenAI models.

For more information about how OpenAI uses your data, see Data controls in the OpenAI platform.

Can we use our own AI model?

At this time, the AI model is provided through the central Idera service, which gives us the opportunity to refine the service. We will look into providing the ability to use your own model at some point, but we don't have a timeline for this yet. If this is of interest, please submit a support ticket and let us know more.

What does the topology look like?

[Diagram: AI Model Builder topology]