Creating An Application
In Dify, an "application" refers to a real-world application scenario built on large language models such as GPT. By creating an application, you can apply intelligent AI technology to specific needs. An application encompasses both the engineering paradigms for developing AI applications and the concrete deliverables.
In short, an application delivers to developers:
- A user-friendly, encapsulated LLM API that backend or frontend applications can call directly, with token authentication
- A ready-to-use, beautiful, hosted WebApp that you can develop further using the WebApp templates
- A set of easy-to-use interfaces for Prompt Engineering, context management, log analysis, and annotation
You can choose one or all of them to support your AI application development.
Dify offers two types of applications: text generation and conversational. More application paradigms may appear in the future, and the ultimate goal of Dify is to cover more than 80% of typical LLM application scenarios. The differences between text generation and conversational applications are shown in the table below:
|  | Text Generation | Conversational |
| --- | --- | --- |
| WebApp interface | Form + results | Chat style |
| API endpoint | completion-messages | chat-messages |
| Interaction mode | One question, one answer | Multi-turn dialogue |
| Streaming results | Supported | Supported |
| Context preservation | Current turn only | Continuous |
| User input form | Supported | Supported |
| Datasets & Plugins | Supported | Supported |
| AI opening remarks | Not supported | Supported |
| Scenario examples | Translation, judgment, indexing | Chat, or almost anything |
Launch the WebApp quickly
One of the benefits of creating AI applications with Dify is that you can launch a user-friendly Web application in just a few minutes, based on your Prompt orchestration.
- If you are using the self-hosted open-source version, the application will run on your server.
- If you are using the cloud version, the application will be hosted on LangGenius.app.
Launch WebApp
In the application overview page, you can find a card for the AI site (WebApp). Simply enable WebApp access to get a shareable link for your users.

We provide a sleek WebApp interface for both of the following applications:
- Text Generation (go to preview)
- Conversational (go to preview)
Configure your WebApp
Click the settings button on the WebApp card to configure some options for the AI site. These will be visible to the end users:
- Icon
- Name
- Application Description
- Interface Language
- Copyright Information
- Privacy Policy Link
Embed your WebApp
Dify supports embedding your AI application into your business website. With this capability, you can create AI customer service and business knowledge Q&A applications with business data on your official website within minutes. Click the embed button on the WebApp card, copy the embed code, and paste it into the desired location on your website.
- For the iframe tag: copy the iframe code and paste it into a tag used to display the AI application on your website (such as <div> or <section>).
- For the script tag: copy the script code and paste it into the <head> or <body> tag of your website.

For example, after pasting the script code into your official website, you will get an AI chatbot on your site:

Prompt Engineering
Master the use of Dify for orchestrating applications and practicing Prompt Engineering, and build high-value AI applications with the two built-in application types.
The core concept of Dify is the declarative definition of AI applications. Everything including Prompts, context, plugins, etc. can be described in a YAML file (which is why it is called Dify). It ultimately presents a single API or out-of-the-box WebApp.
At the same time, Dify provides an easy-to-use Prompt orchestration interface where developers can visually orchestrate various application features based on Prompts. Doesn't it sound simple?
For both simple and complex AI applications, good Prompts can effectively improve the quality of model output, reduce error rates, and meet the needs of specific scenarios. Dify currently provides two common application forms: conversational and text generator. This section will guide you through visually orchestrating AI applications.
Application Orchestration Steps
1. Determine application scenarios and functional requirements
2. Design and test Prompts and model parameters
3. Orchestrate Prompts
4. Publish the application
5. Observe and continuously iterate
Hands-on Practice
The Differences between Application Types
Text generation and conversation applications in Dify differ slightly in prompt orchestration. Conversation applications must incorporate the "conversation lifecycle" to meet more complex user scenarios and context management needs.
Prompt Engineering has developed into a field with tremendous potential, worthy of continuous exploration. Please continue reading to learn about the orchestration guidelines for both types of applications.
Extended Reading
Text Generator
Text generation applications are applications that can automatically generate high-quality text based on prompts provided by users. They can generate various types of text, such as article summaries, translations, etc.
Applicable scenarios
Text generation applications are suitable for scenarios that require a large amount of text creation, such as news media, advertising, SEO, marketing, etc. They can provide efficient and fast text generation services for these industries, reduce labor costs, and improve production efficiency.
How to compose
Text generation applications support: prefix prompts, variables, context, and generating more similar content.
Here, we use a translation application as an example to introduce how to compose a text generation application.
Step 1: Create the application
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Text Generator" as the application type.

Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.

2.1 Fill in Prefix Prompts
Prompts are used to give the AI a series of instructions and constraints for its response. Form variables such as {{input}} can be inserted; variables in the prompt will be replaced with the values filled in by the user.
The prompt we are filling in here is: Translate the content to: {{language}}. The content is as follows:

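As an illustration of how variable replacement works (a simplified sketch, not Dify's actual implementation), the {{language}} placeholder is substituted with the user's form value before the prompt is sent to the model:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with user-supplied form values."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Leave unknown placeholders untouched instead of failing.
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = render_prompt("Translate the content to: {{language}}", {"language": "French"})
# prompt == "Translate the content to: French"
```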
2.2 Adding Context
If you want the application to generate content based on private context, you can use our dataset feature. Click the "Add" button in the context section to add a dataset.

2.3 Adding Feature: Generate more like this
"Generate more like this" allows you to generate multiple texts at once, which you can edit and continue generating from. Click the "Add Feature" button in the upper left corner to enable this feature.

2.4 Debugging
Debug on the right side by entering variables and query content, then click the "Run" button to view the result.

If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:

2.5 Publish
After debugging the application, click the "Publish" button in the upper right corner to save the current settings.
Share Application
You can find the sharing address of the application on the overview page. Click the "Preview" button to preview the shared application. Click the "Share" button to obtain the sharing link address. Click the "Settings" button to set the information of the shared application.

If you want to customize the application shared externally, you can fork our open-source WebApp template. Based on the template, you can modify the application to meet your specific needs and style requirements.
Conversation Application
Conversation applications use a one-question-one-answer mode to have a continuous conversation with the user.
Applicable scenarios
Conversation applications can be used in fields such as customer service, online education, healthcare, financial services, etc. These applications can help organizations improve work efficiency, reduce labor costs, and provide a better user experience.
How to compose
Conversation applications support: prompts, variables, context, opening remarks, and suggestions for the next question.
Here, we use an interviewer application as an example to introduce how to compose a conversation application.
Step 1 Create an application
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Chat App" as the application type.

Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.

2.1 Fill in Prompts
Prompts are used to give the AI a series of instructions and constraints for its response. Form variables such as {{input}} can be inserted; variables in the prompt will be replaced with the values filled in by the user.
The prompt we are filling in here is:
I want you to be the interviewer for the {{jobName}} position. I will be the candidate, and you will ask me interview questions for the position of {{jobName}} developer. I hope you will only answer as the interviewer. Don't write all the questions at once. I wish for you to only interview me. Ask me questions and wait for my answers. Don't write explanations. Ask me one by one like an interviewer and wait for my answer.
When I am ready, you can start asking questions.

For a better experience, we will add an opening dialogue: "Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"
To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Conversation remarks" feature:

And then edit the opening remarks:

2.2 Adding Context
If an application wants to generate content based on private contextual conversations, it can use our dataset feature. Click the "Add" button in the context to add a dataset.

2.3 Debugging
Enter the user input on the right side to debug the application.

If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:

We support the GPT-4 model.
2.4 Publish
After debugging the application, click the "Publish" button in the upper right corner to save the current settings.
Share Application
On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application. Click the "Share" button to get the sharing link address. Click the "Settings" button to set the shared application information.

If you want to customize the application that you share, you can Fork our open source WebApp template. Based on the template, you can modify the application to meet your specific needs and style requirements.
Developing with APIs
Dify offers a "Backend-as-a-Service" API, providing numerous benefits to AI application developers. This approach enables developers to access the powerful capabilities of large language models (LLMs) directly in frontend applications without the complexities of backend architecture and deployment processes.
Benefits of using Dify API
- Allow frontend apps to securely access LLM capabilities without backend development
- Design applications visually, with real-time updates across all clients
- Well-encapsulated original LLM APIs
- Effortlessly switch between LLM providers and centrally manage API keys
- Operate applications visually, including log analysis, annotation, and user activity observation
- Continuously provide more tools, plugins, and datasets
How to use
Choose an application, and find the API Access in the left-side navigation of the Apps section. On this page, you can view the API documentation provided by Dify and manage credentials for accessing the API.

You can create multiple access credentials for an application to deliver to different users or developers. This means that API users can use the AI capabilities provided by the application developer, but the underlying Prompt engineering, datasets, and tool capabilities are encapsulated.
As a best practice, API keys should be used from your backend rather than exposed in plaintext in frontend code or requests. This helps prevent your application from being abused or attacked.
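A minimal sketch of this pattern (function and variable names here are illustrative, using only the endpoint shown in this doc's examples): the backend attaches the secret key, so the browser never sees it:

```python
import json
import urllib.request

API_BASE = "https://api.dify.dev/v1"      # base URL from the examples below
SECRET_KEY = "ENTER-YOUR-SECRET-KEY"      # kept server-side, e.g. in an environment variable

def build_upstream_request(query: str, user: str) -> urllib.request.Request:
    """Build the request to Dify on the backend; clients only talk to your server."""
    payload = {"inputs": {}, "query": query, "response_mode": "blocking", "user": user}
    return urllib.request.Request(
        f"{API_BASE}/completion-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {SECRET_KEY}",  # never shipped to the frontend
            "Content-Type": "application/json",
        },
        method="POST",
    )
```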
For example, if you're a developer in a consulting company, you can offer AI capabilities based on the company's private database to end-users or developers, without exposing your data and AI logic design. This ensures a secure and sustainable service delivery that meets business objectives.
Text-generation application
These applications are used to generate high-quality text, such as articles, summaries, translations, etc., by calling the completion-messages API and sending user input to obtain generated text results. The model parameters and prompt templates used for generating text depend on the developer's settings in the Dify Prompt Arrangement page.
You can find the API documentation and example requests for this application in Applications -> Access API.
For example, here is a sample API call for text generation:
curl --location --request POST 'https://api.dify.dev/v1/completion-messages' \
--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
    "inputs": {},
    "query": "Hi",
    "response_mode": "streaming",
    "user": "abc-123"
}'
Conversational applications
Suitable for most scenarios, conversational applications engage in continuous dialogue with users in a question-and-answer format. To start a conversation, call the chat-messages API and maintain the session by continuously passing in the returned conversation_id.
You can find the API documentation and example requests for this application in Applications -> Access API.
For example, here is a sample API call to chat-messages:
curl --location --request POST 'https://api.dify.dev/v1/chat-messages' \
--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
    "inputs": {},
    "query": "eh",
    "response_mode": "streaming",
    "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276",
    "user": "abc-123"
}'
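Building on the curl examples above, a multi-turn client keeps the session by passing back the conversation_id returned from the previous turn. The sketch below is illustrative (it uses blocking mode for simplicity; streaming mode returns server-sent events instead):

```python
import json
import urllib.request

API_URL = "https://api.dify.dev/v1/chat-messages"
API_KEY = "ENTER-YOUR-SECRET-KEY"

def build_chat_payload(query: str, user: str, conversation_id: str = "") -> dict:
    """An empty conversation_id starts a new conversation; a returned id continues it."""
    return {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "conversation_id": conversation_id,
        "user": user,
    }

def chat(query: str, user: str, conversation_id: str = "") -> dict:
    """Send one chat turn and return the parsed JSON response."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(query, user, conversation_id)).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# first = chat("Hi", user="abc-123")                        # new conversation
# again = chat("Tell me more", user="abc-123",
#              conversation_id=first["conversation_id"])    # same session continues
```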
Logs & Annotations
Please ensure that your application complies with local regulations when collecting user data. The common practice is to publish a privacy policy and obtain user consent.
The Logs feature is designed to observe and annotate the performance of Dify applications. Dify records logs for all interactions with the application, whether through the WebApp or API. If you are a Prompt Engineer or LLM operator, it will provide you with a visual experience of LLM application operations.
Using the Logs Console
You can find the Logs in the left navigation of the application. This page typically displays:
- Interaction records between users and AI within the selected timeframe
- User inputs and AI outputs, which for conversational applications are usually a series of message flows
- Ratings from users and operators, as well as improvement annotations from operators
The logs currently do not include interaction records from the Prompt debugging process.
Improvement Annotations
These annotations will be used for model fine-tuning in future versions of Dify to improve model accuracy and response style. The current preview version only supports annotations.

Clicking on a log entry will open the log details panel on the right side of the interface. In this panel, operators can annotate an interaction:
- Give a thumbs-up to well-performing messages
- Give a thumbs-down to poorly performing messages
- Mark an improved reply, representing the text you expect the AI to respond with
Please note that if multiple administrators in the team annotate the same log entry, the last annotation will overwrite the previous ones.
WEB APPLICATION
Overview
Web applications are for application consumers. When an application developer creates an application in Dify, they get a corresponding web application. Users of the web application can use it without logging in, and it adapts to different device sizes: PC, tablet, and mobile.
The content of the web application is consistent with the application's published configuration. When the application's configuration is modified and the "Publish" button is clicked on the application's prompt orchestration page, the web application is updated accordingly.
You can enable and disable access to the web application on the application overview page, and modify the web application's site information:
- Icon
- Name
- Application Description
- Interface Language
- Copyright Information
- Privacy Policy Link
The features available in the web application depend on which functions the developer enabled when orchestrating the application, for example:
- Conversation remarks
- Variables filled in before the conversation
- Follow-up
- Speech to text
- More like this (text generation apps)
- ...
In the following chapters, we will introduce the two types of web applications separately:
- Text Generator
- Conversational
Text Generator
A text generation application automatically generates high-quality text based on the prompts provided by the user. It can generate various types of text, such as article summaries, translations, etc.
Text generation applications support the following features:
1. Run it once
2. Run in batches
3. Save the run results
4. Generate more similar results
Let's introduce them separately.
Run it once
Enter the query content, click the run button, and the result will be generated on the right, as shown in the following figure:

In the generated results section, click the "Copy" button to copy the content to the clipboard. Click the "Save" button to save the content. You can see the saved content in the "Saved" tab. You can also "like" and "dislike" the generated content.
Run in batches
Sometimes we need to run an application many times. For example, suppose a web application generates articles based on topics, and we want to generate 100 articles on different topics. Without batching, the task would have to be run 100 times, and we would have to wait for each run to finish before starting the next.
In this scenario, the batch run feature is convenient to operate (enter the topics into a CSV file and execute only once) and saves generation time (multiple tasks run at the same time). Usage is as follows:
Step 1 Enter the batch run page
Click the "Run Batch" tab to enter the batch run page.

Step 2 Download the template and fill in the content
Click the Download Template button to download the template. Edit the template, fill in the content, and save as a .csv file.

Step 3 Upload the file and run

If you need to export the generated content, you can click the "Download" button in the upper right corner to export it as a CSV file.
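As a sketch of preparing the batch file programmatically (the "topic" column name is hypothetical and must match your application's variable names), the template is an ordinary CSV with one row per run:

```python
import csv

topics = ["Renewable energy", "Remote work", "Urban gardening"]

with open("batch_run.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["topic"])            # header row: one column per app variable
    writer.writerows([t] for t in topics) # one data row per generation task
```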
Save run results
Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.

Generate more similar results
If the "More like this" feature was enabled when the application was orchestrated, clicking the "More like this" button in the web application generates content similar to the current result, as shown below:

Conversation Application
Conversational applications use a question-and-answer mode to maintain a dialogue with the user. They support the following capabilities (please confirm that these features were enabled when the application was orchestrated):
- Variables filled in before the dialogue
- Conversation creation, pinning, and deletion
- Conversation remarks
- Follow-up
- Speech to text
Variables filled in before the dialog
If the application's orchestration requires variables, you need to fill in the information according to the prompts before entering the dialogue window:

Fill in the necessary content and click the "Start Chat" button to start chatting.

Hover over the AI's answer to copy the conversation content or give the answer a "like" or "dislike".

Conversation creation, pinning and deletion
Click the "New Conversation" button to start a new conversation. Hover over a conversation to "pin" or "delete" it.

Conversation remarks
If the "Conversation remarks" feature was enabled when the application was orchestrated, the AI application will automatically send the first message when a new conversation is created:

Follow-up
If the "Follow-up" feature was enabled during application orchestration, the system will automatically generate three suggested related questions after each answer:

Speech to text
If the "Speech to Text" feature was enabled during application orchestration, you will see a voice input icon in the input box on the web application side. Click the icon to convert voice input into text:
Please make sure that the device you are using has authorized access to the microphone.


EXPLORE
Chat
Chat in Explore is a conversational application used to explore the boundaries of Dify's capabilities.
When we talk to large language models, we often get answers that are outdated or invalid. This is because the model was trained on old data and lacks internet access. Building on the large model, Chat uses agent capabilities and tools to give the model the ability to run real-time online queries.

Chat supports the use of plugins and datasets.
Use plugins
An LLM (large language model) cannot access the internet or invoke external tools by itself. This falls short of real usage scenarios, for example:
- When we want to know today's weather, we need internet access.
- When we want to summarize the content of a web page, we need an external tool to read it.
These problems can be solved with agent mode: when the LLM cannot answer the user's question directly, it will try to use the available plugins to answer it.
In Dify, we use different agent strategies for different models. OpenAI models use GPT function calling; other models use ReAct. In our current testing, function calling performs better. To learn more, you can read the link below:
Currently we support the following plugins:
- Google Search: searches Google for answers.
- Web Reader: reads the content of linked web pages.
- Wikipedia: searches Wikipedia for answers.
We can choose the plugins needed for this conversation before the conversation starts.

If you use the Google search plugin, you need to configure the SerpAPI key.

Configuration entry:

Use datasets
Chat supports datasets. After datasets are selected, if the user's question relates to the dataset content, the model will find the answer from the dataset.
We can select the datasets needed for this conversation before the conversation starts.

The thinking process
The thinking process refers to how the model uses plugins and datasets. You can see the thought process in each answer.

ADVANCED
Datasets&Index
Most language models use outdated training data and have length limitations for the context of each request. For example, GPT-3.5 is trained on corpora from 2021 and has a limit of approximately 4k tokens per request. This means that developers who want their AI applications to be based on the latest and private context conversations must use techniques like embedding.
Dify's dataset feature allows developers (and even non-technical users) to easily manage datasets and automatically integrate them into AI applications. All you need to do is prepare text content, such as:
- Long text content (TXT, Markdown, JSONL, or even PDF files)
- Structured data (CSV, Excel, etc.)
Additionally, we are gradually supporting syncing data from various data sources to datasets, including:
- Notion
- GitHub
- Databases
- Webpages
- ...
Practice: if your company wants to build an AI customer service assistant based on existing knowledge bases and product documentation, you can upload the documents to a dataset in Dify and create a conversational application. In the past, this might have taken several weeks and been difficult to maintain continuously.
Datasets and Documents
In Dify, a dataset is a collection of documents. A dataset can be integrated as a whole into an application to serve as its context. Documents can be uploaded by developers or operations staff, or synced from other data sources (each document typically corresponds to a file unit in the data source).
Steps to upload a document:
1. Upload your file, usually a long text file or a spreadsheet
2. Segment, clean, and preview it
3. Dify submits it to the LLM provider to be embedded as vector data and stored
4. Set metadata for the document
5. Ready to use in the application!
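The upload steps can be sketched as a toy pipeline (the segmenter and embedder below are simplified stand-ins; in Dify, the real embedding is performed by the LLM provider):

```python
def segment(text: str, max_chars: int = 500) -> list[str]:
    """Naive segmentation: split long text into fixed-size chunks."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def embed(chunk: str) -> list[float]:
    """Stand-in embedder; a real one calls the provider's embedding API."""
    return [float(len(chunk)), float(sum(map(ord, chunk)) % 1000)]

def index_document(text: str) -> list[dict]:
    """Segment the text, embed each chunk, and keep it with its vector and position."""
    return [{"text": chunk, "vector": embed(chunk), "position": i}
            for i, chunk in enumerate(segment(text))]

records = index_document("Dify turns your documents into context for AI applications. " * 30)
```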
How to write a good dataset description
When multiple datasets are referenced in an application, AI uses the description of the datasets and the user's question to determine which dataset to use to answer the user's question. Therefore, a well-written dataset description can improve the accuracy of AI in selecting datasets.
The key to writing a good dataset description is to clearly describe the content and characteristics of the dataset. It is recommended that the dataset description begin with this: Useful only when the question you want to answer is about the following: specific description. Here is an example of a real estate dataset description:
Useful only when the question you want to answer is about the following: global real estate market data from 2010 to 2020. This data includes information such as the average housing price, property sales volume, and housing types for each city. In addition, this dataset also includes some economic indicators such as GDP and unemployment rate, as well as some social indicators such as population and education level. These indicators can help analyze the trends and influencing factors of the real estate market. With this data, we can understand the development trends of the global real estate market, analyze the changes in housing prices in various cities, and understand the impact of economic and social factors on the real estate market.
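The dataset-selection behavior described earlier can be roughly illustrated (a toy scorer, not Dify's actual mechanism) by matching the user's question against each dataset description by word overlap:

```python
def pick_dataset(question: str, descriptions: dict[str, str]) -> str:
    """Toy selector: choose the dataset whose description best overlaps the question."""
    question_words = set(question.lower().split())
    def overlap(description: str) -> int:
        return len(question_words & set(description.lower().split()))
    return max(descriptions, key=lambda name: overlap(descriptions[name]))

descriptions = {
    "real_estate": "global real estate market data with housing price and sales volume",
    "hr_policy": "company HR policies covering vacation benefits and onboarding",
}
chosen = pick_dataset("How did the housing price change over time?", descriptions)
# chosen == "real_estate"
```

A clear, specific description gives this kind of matching more signal to work with, which is why the recommended opening sentence above helps.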
Create a dataset
1. Click Datasets in the main navigation bar of Dify. On this page, you can see the existing datasets. Click "Create Dataset" to enter the creation wizard.
2. If you have already prepared your files, you can start by uploading them.
3. If you haven't prepared your documents yet, you can create an empty dataset first.
Uploading Documents by File Upload
1. Select the file you want to upload. We support batch uploads.
2. Preview the full text.
3. Perform segmentation and cleaning.
4. Wait for Dify to process the data for you; this step usually consumes tokens from the LLM provider.
Modify Documents
For technical reasons, if developers make the following changes to a document, Dify will create a new document for you, and the old document will be archived and deactivated:
1. Adjusting segmentation and cleaning settings
2. Re-uploading the file
Maintain Datasets via API
TODO
Dataset Settings
Click Settings in the left navigation of the dataset. You can change the following settings for the dataset:
- Dataset name, for identifying the dataset
- Dataset description, to help AI use the dataset appropriately. If the description is empty, Dify's automatic indexing strategy will be used.
- Permissions, which can be set to Only Me or All Team Members. Those without permission cannot view or edit the dataset.
- Indexing mode: in High Quality mode, OpenAI's embedding interface is called during processing to provide higher accuracy when users query. In Economic mode, offline vector engines, keyword indexing, etc. are used, reducing accuracy but consuming no tokens.
Note: Upgrading the indexing mode from Economic to High Quality will incur additional token consumption. Downgrading from High Quality to Economic will not consume tokens.
Integrate into Applications
Once the dataset is ready, it needs to be integrated into the application. When the AI application processes a user query, it will automatically use the associated dataset content as reference context.
1. Go to the application's Prompt Arrangement page
2. In the context options, select the dataset you want to integrate
3. Save the settings to complete the integration
Q&A
Q: What should I do if the PDF upload is garbled?
A: If your PDF parsing appears garbled under certain formatted contents, you could consider converting the PDF to Markdown format, which currently offers higher accuracy, or you could reduce the use of images, tables, and other formatted content in the PDF. We are researching ways to optimize the experience of using PDFs.
Q: How does the consumption mechanism of context work?
A: With a dataset added, each query consumes the segmented content (currently two embedded segments) + question + prompt + chat history combined. However, it will not exceed the model's limit, such as 4096 tokens.
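The combination described above can be sketched as a budget check (word counts stand in for real tokens here; actual limits depend on the model's tokenizer):

```python
def rough_tokens(text: str) -> int:
    """Crude stand-in: real systems count tokens with the model's tokenizer."""
    return len(text.split())

def fits_context(segments: list[str], question: str, prompt: str,
                 history: list[str], limit: int = 4096) -> bool:
    """Check that dataset segments + question + prompt + chat history fit the model limit."""
    total = sum(rough_tokens(t) for t in segments + history)
    total += rough_tokens(question) + rough_tokens(prompt)
    return total <= limit

ok = fits_context(["embedded segment one", "embedded segment two"],
                  "What is Dify?", "Answer using the context above.", [])
```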
Q: Where does the embedded dataset appear when asking questions?
A: It will be embedded as context before the question.
Q: Is there any priority between the added dataset and OpenAI's answers?
A: The dataset serves as context and is used together with questions for LLM to understand and answer; there is no priority relationship.
Q: Why do I get a hit in testing but not in the application?
A: You can troubleshoot issues by following these steps:
1. Make sure you have added text on the prompt page and clicked the Save button in the top right corner.
2. Test whether it responds normally in the prompt debugging interface.
3. Try again in a new WebApp session window.
4. Optimize your data format and quality. For practical reference, visit https://github.com/langgenius/dify/issues/90. If none of these steps solve your problem, please join our community for help.
Q: Will APIs related to hit testing be opened up so that Dify can access knowledge bases and implement dialogue generation using custom models?
A: We plan to open up Webhooks later on; however, there are no current plans for this feature. You can achieve your requirements by connecting to any vector database.
Q: How do I add multiple datasets?
A: Due to short-term performance considerations, we currently only support one dataset. If you have multiple sets of data, you can upload them within the same dataset for use.
Sync from Notion
Dify datasets support importing from Notion and setting up sync, so that data is automatically synced to Dify after it is updated in Notion.
Authorization verification
1. When creating a dataset, select the data source, click "Sync from Notion" > "Go to connect", and complete authorization verification according to the prompts.
2. Alternatively, click "Settings" > "Data Sources" > "Add a Data Source", then click "Connect" on the Notion source to complete authorization verification.

Import Notion data
After completing authorization verification, go to the dataset creation page, click Sync from Notion, and select the required authorization page to import.
Segmentation and cleaning
Next, select your segmentation settings and indexing method, then save and process. Wait for Dify to process the data; this step usually consumes tokens from the LLM provider. Dify supports importing not only ordinary page types, but also summarizes and saves the page properties under database types.
Note: Images and files are not currently supported for import. Table data will be converted to text.
Sync Notion data
If your Notion content has been modified, you can click "Sync" directly on the Dify dataset document list page to sync the data with one click (note that each click syncs the current content). This step consumes tokens.

Plugins
Plugins are an upcoming feature of Dify. You will be able to incorporate plugins into your app orchestration and access AI applications with plugin capabilities through an API or WebApp. Dify is compatible with the ChatGPT Plugins standard and provides some native plugins.
Based on WebApp Template
If you are developing a new product from scratch or are in the product prototype design phase, you can quickly launch an AI site using Dify. At the same time, Dify hopes developers can freely create different forms of front-end applications. To that end, we provide:
- SDKs for quickly accessing the Dify API in various languages
- WebApp Templates: development scaffolding for each type of application
The WebApp Templates are open source under the MIT license. You are free to modify and deploy them to achieve all the capabilities of Dify or as a reference code for implementing your own App.
The fastest way to use the WebApp Template is to click "Use this template" on GitHub, which is equivalent to forking a new repository. Then you need to configure the Dify App ID and API Key, like this:
export const APP_ID = ''
export const API_KEY = ''
More config in config/index.ts:
export const APP_INFO: AppInfo = {
  "title": 'Chat APP',
  "description": '',
  "copyright": '',
  "privacy_policy": '',
  "default_language": 'zh-Hans'
}
export const isShowPrompt = true
export const promptTemplate = ''
Each WebApp Template provides a README file containing deployment instructions. Usually, WebApp Templates contain a lightweight backend service to ensure that developers' API keys are not directly exposed to users.
These WebApp Templates can help you quickly build prototypes of AI applications and use all the capabilities of Dify. If you develop your own applications or new templates based on them, feel free to share with us.
Model Configuration
Dify currently supports major model providers such as OpenAI's GPT series. Here are the model providers we currently support:
- OpenAI
- Azure OpenAI Service
- Anthropic
- Hugging Face Hub (coming soon)
Based on technology developments and user needs, we will continue adding support for more LLM providers over time.
Trial Hosted Models
We provide trial quotas for different models to Dify cloud service users. Please set up your own model provider before the trial quota runs out; otherwise, it may impact the normal use of your application.