
Prompt Engineering
Master TalentBot Enterprise LLMops AIops for orchestrating applications and practicing Prompt Engineering, and build high-value AI applications with the two built-in application types.
The core concept of TalentBot Enterprise LLMops AIops is the declarative definition of AI applications: everything, including Prompts, context, and plugins, can be described in a YAML file. The application is ultimately presented as a single API or an out-of-the-box WebApp.
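As a rough illustration, such a declarative definition could be sketched like this. The field names below are invented for illustration only and are not the product's actual schema:

```yaml
# Hypothetical app definition (illustrative field names, not the real schema)
app:
  name: translation-helper
  mode: completion            # "completion" (text generator) or "chat"
prompt:
  prefix: "Translate the content to: {{language}}."
  variables:
    - name: language
      type: select
      options: [English, French, Japanese]
model:
  provider: openai
  name: gpt-4
  parameters:
    temperature: 0.7
```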
At the same time, TalentBot Enterprise LLMops AIops provides an easy-to-use Prompt orchestration interface where developers can visually orchestrate various application features based on Prompts. Doesn't it sound simple?
For both simple and complex AI applications, good Prompts can effectively improve the quality of model output, reduce error rates, and meet the needs of specific scenarios. TalentBot Enterprise LLMops AIops currently provides two common application forms: conversational and text generator. This section will guide you through visually orchestrating AI applications.
Application Orchestration Steps
Step 1: Determine application scenarios and functional requirements
Step 2: Design and test Prompts and model parameters
Step 3: Orchestrate Prompts
Step 4: Publish the application
Step 5: Observe and continuously iterate
The Differences between Application Types
Text generation and conversation applications in TalentBot Enterprise LLMops AIops differ slightly in Prompt orchestration. Conversation applications must incorporate the "conversation lifecycle" to handle more complex user scenarios and context-management needs.
Prompt Engineering has developed into a field with tremendous potential, worthy of continuous exploration. Please continue reading to learn about the orchestration guidelines for both types of applications.
Hands-on Practice
TODO
Extended Reading
Text Generator
Text generation applications automatically generate high-quality text based on prompts provided by users. They can generate various types of text, such as article summaries, translations, etc.
Applicable scenarios
Text generation applications are suitable for scenarios that require a large amount of text creation, such as news media, advertising, SEO, marketing, etc. They can provide efficient and fast text generation services for these industries, reduce labor costs, and improve production efficiency.
How to compose
Text generation applications support: prefix prompt words, variables, context, and generating more similar content.
Here, we use a translation application as an example to introduce how to compose a text generation application.
Step 1: Create the application
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Text Generator" as the application type.

Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.

2.1 Fill in Prefix Prompts
Prompts are used to give a series of instructions and constraints to the AI response. Form variables can be inserted, such as {{input}}. The value of variables in the prompts will be replaced with the value filled in by the user.
The prompt we are filling in here is: Translate the content to: {{language}}. The content is as follows:
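As a sketch of how such placeholders behave, here is a minimal stand-alone substitution function. The real template engine is internal to the platform; this only illustrates the `{{variable}}` convention:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with user-supplied values,
    leaving unknown placeholders untouched."""
    def substitute(match):
        return str(variables.get(match.group(1), match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

rendered = render_prompt("Translate the content to: {{language}}.",
                         {"language": "French"})
# rendered == "Translate the content to: French."
```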

2.2 Adding Feature: Generate More Like This
"Generate more like this" allows you to generate multiple texts at once, which you can edit and continue generating from. Click the "Add Feature" button in the upper left corner to enable this feature.
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
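For orientation, the parameters exposed in such a panel commonly include the following. The exact names, ranges, and defaults in TalentBot Enterprise LLMops AIops may differ; these values are illustrative only:

```python
# Common LLM sampling parameters (illustrative values; check the actual panel)
model_params = {
    "temperature": 0.7,        # higher -> more varied output; lower -> more deterministic
    "top_p": 0.9,              # nucleus sampling: keep only the top 90% probability mass
    "max_tokens": 512,         # upper bound on the length of the generated text
    "presence_penalty": 0.0,   # penalize tokens that have already appeared at all
    "frequency_penalty": 0.0,  # penalize tokens in proportion to how often they appeared
}
```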


2.3 Adding Context
If you want the application to generate content based on private context, you can use the dataset feature. Click the "Add" button in the context section to add a dataset.
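To make the idea concrete, here is a toy sketch of what adding context does: retrieve the most relevant stored snippet and prepend it to the prompt. It uses naive keyword overlap for brevity; real dataset retrieval typically relies on vector embeddings:

```python
# Toy context retrieval: pick the stored snippet sharing the most words
# with the query, then prepend it to the prompt sent to the model.
dataset = [
    "Our refund window is 30 days from delivery.",
    "Support hours are 9am to 6pm, Monday through Friday.",
]

def retrieve(query: str) -> str:
    words = set(query.lower().split())
    return max(dataset, key=lambda s: len(words & set(s.lower().split())))

def build_prompt(query: str) -> str:
    return f"Context: {retrieve(query)}\n\nQuestion: {query}"
```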

2.4 Debugging
We debug on the right side by entering variables and querying content. Click the "Run" button to view the results of the operation.

2.5 Publish
After debugging the application, click the "Publish" button in the upper right corner to save the current settings.
Share Application
You can find the sharing address of the application on the overview page. Click the "Preview" button to preview the shared application. Click the "Share" button to obtain the sharing link address. Click the "Settings" button to set the information of the shared application.

If you want to customize the application shared externally, you can fork our open-source WebApp template. Based on the template, you can modify the application to meet your specific needs and style requirements.
Conversation Application
Conversation applications use a one-question-one-answer mode to have a continuous conversation with the user.
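The mechanics behind this mode can be sketched as follows: every turn is appended to a shared history so the model always sees prior context. `call_model` below is a hypothetical placeholder, not the platform's real API:

```python
# Minimal conversation-lifecycle sketch: history persists across turns.
def call_model(messages: list) -> str:
    # A real deployment would send `messages` to the model API here.
    return f"(model reply to: {messages[-1]['content']})"

history = []

def chat(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```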
Applicable scenarios
Conversation applications can be used in fields such as customer service, online education, healthcare, financial services, etc. These applications can help organizations improve work efficiency, reduce labor costs, and provide a better user experience.
How to compose
Conversation applications support: prompts, variables, context, opening remarks, and suggestions for the next question.
Here, we use an interviewer application as an example to introduce how to compose a conversation application.
Step 1 Create an application
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Chat App" as the application type.

Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.

2.1 Fill in Prompts
Prompts are used to give a series of instructions and constraints to the AI response. Form variables can be inserted, such as {{input}}. The value of variables in the prompts will be replaced with the value filled in by the user.
The prompt we are filling in here is:
I want you to be the interviewer for the {{jobName}} position. I will be the candidate, and you will ask me interview questions for the position of {{jobName}} developer. I hope you will only answer as the interviewer. Don't write all the questions at once. I wish for you to only interview me. Ask me questions and wait for my answers. Don't write explanations. Ask me one by one like an interviewer and wait for my answer.
When I am ready, you can start asking questions.
For a better experience, we will add an opening dialogue: "Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"
To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Opening Remarks" feature:
And then edit the opening remarks:
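As a small illustration of how the {{name}} variable is filled when a conversation starts (`str.replace` stands in for the platform's template engine):

```python
opening = "Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"

def render_opening(name: str) -> str:
    # Substitute the form variable into the opening remarks.
    return opening.replace("{{name}}", name)

greeting = render_opening("Alice")
# greeting == "Hello, Alice. I'm your interviewer, Bob. Are you ready?"
```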

2.2 Debugging
Fill in the user input on the right side to debug the application.
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
We support the GPT-4 model.
2.3 Publish
After debugging the application, click the "Publish" button in the upper right corner to save the current settings.



2.4 Adding Context
If you want the application to generate content based on private context, you can use the dataset feature. Click the "Add" button in the context section to add a dataset.


Share Application
On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application. Click the "Share" button to get the sharing link address. Click the "Settings" button to set the shared application information.

If you want to customize the shared application, you can fork our open-source WebApp template. Based on the template, you can modify the application to meet your specific needs and style requirements.