
Prompt Engineering

Master the use of TalentBot Enterprise LLMops AIops for orchestrating applications and practicing Prompt Engineering, and build high-value AI applications with the two built-in application types.

The core concept of TalentBot Enterprise LLMops AIops is the declarative definition of AI applications: everything, including Prompts, context, and plugins, can be described in a YAML file. The result is presented as a single API or an out-of-the-box WebApp.
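As a rough illustration, a declarative application definition might look like the sketch below. The field names here are illustrative assumptions for explanation only, not the platform's actual schema.

```yaml
# Hypothetical application definition -- field names are illustrative
# assumptions, not the platform's actual schema.
app:
  name: translation-helper
  mode: completion            # a text generation application
model:
  provider: openai
  name: gpt-3.5-turbo
  parameters:
    temperature: 0.7
prompt: |
  Translate the content to: {{language}}. The content is as follows:
variables:
  - name: language
    type: select
    options: [English, French, Japanese]
context:
  datasets: []                # datasets attached for private context
```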

At the same time, TalentBot Enterprise LLMops AIops provides an easy-to-use Prompt orchestration interface where developers can visually orchestrate various application features based on Prompts. Doesn't it sound simple?

For both simple and complex AI applications, good Prompts can effectively improve the quality of model output, reduce error rates, and meet the needs of specific scenarios. TalentBot Enterprise LLMops AIops currently provides two common application forms: conversational and text generation. This section will guide you through visually orchestrating AI applications.

Application Orchestration Steps

  1. Determine application scenarios and functional requirements

  2. Design and test Prompts and model parameters

  3. Orchestrate Prompts

  4. Publish the application

  5. Observe and continuously iterate

Hands-on Practice

TODO

The Differences between Application Types

Text generation and conversation applications in TalentBot Enterprise LLMops AIops have slight differences in prompt orchestration. Conversation applications require incorporating the "conversation lifecycle" to meet more complex user scenarios and context-management needs.
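Conceptually, the difference is that a conversation application carries the message history of the conversation lifecycle into each model call, while a text generation application issues one-shot completions. A minimal sketch of the idea in Python (not the platform's internals; call_model is a stub standing in for a real LLM call):

```python
# Minimal sketch of a conversation lifecycle: history is carried across turns.
def call_model(messages: list[dict]) -> str:
    # Stub standing in for a real LLM call
    return f"(model reply to: {messages[-1]['content']})"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the full history provides the context
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("Hello!"))
print(chat_turn("What did I just say?"))  # answered with earlier turns in context
```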

Prompt Engineering has developed into a field with tremendous potential, worthy of continuous exploration. Please continue reading to learn about the orchestration guidelines for both types of applications.

Text Generator

Text generation applications are applications that can automatically generate high-quality text based on prompts provided by users. They can generate various types of text, such as article summaries, translations, etc.

Applicable scenarios

Text generation applications are suitable for scenarios that require a large amount of text creation, such as news media, advertising, SEO, marketing, etc. They can provide efficient and fast text generation services for these industries, reduce labor costs, and improve production efficiency.

How to compose

Text generation applications support prefix prompts, variables, context, and the "generate more like this" feature.

Here, we use a translation application as an example to show how to compose a text generation application.

Step 1: Create the application

Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Text Generator" as the application type.


Step 2: Compose the Application

After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.


2.1 Fill in Prefix Prompts

Prompts give the AI a series of instructions and constraints for its response. Form variables, such as {{input}}, can be inserted; the variables in the prompt are replaced with the values filled in by the user.

The prompt we are filling in here is: "Translate the content to: {{language}}. The content is as follows:"

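Conceptually, this substitution is simple template replacement. A minimal sketch in Python, illustrating the idea only (the platform's actual implementation is not shown here):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with user-supplied values."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prefix = "Translate the content to: {{language}}. The content is as follows:"
print(render_prompt(prefix, {"language": "French"}))
# -> Translate the content to: French. The content is as follows:
```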

2.2 Adding Feature: Generate more like this

"Generate more like this" lets you generate multiple texts at once, which you can edit and continue generating from. Click the "Add Feature" button in the upper left corner to enable this feature.

If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model.


2.3 Adding Context

If you want the application to generate content based on private context, you can use the dataset feature. Click the "Add" button in the context section to add a dataset.


2.4 Debugging

Debug on the right side by entering the variables and the query content, then click the "Run" button to view the result.


2.5 Publish

After debugging the application, click the "Publish" button in the upper right corner to save the current settings.

Share Application

You can find the sharing address of the application on the overview page. Click the "Preview" button to preview the shared application. Click the "Share" button to obtain the sharing link address. Click the "Settings" button to set the information of the shared application.


If you want to customize the application shared outside, you can fork our open-source WebApp template. Based on the template, you can modify the application to meet your specific situation and style requirements.

Conversation Application

Conversation applications use a one-question-one-answer mode to have a continuous conversation with the user.

Applicable scenarios

Conversation applications can be used in fields such as customer service, online education, healthcare, financial services, etc. These applications can help organizations improve work efficiency, reduce labor costs, and provide a better user experience.

How to compose

Conversation applications support prompts, variables, context, opening remarks, and suggestions for the next question.

Here, we use an interviewer application as an example to show how to compose a conversation application.


Step 1: Create the application

Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Chat App" as the application type.

[Image: Create Application]

Step 2: Compose the Application

After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “Prompt Eng.” to compose the application.


2.1 Fill in Prompts

Prompts give the AI a series of instructions and constraints for its response. Form variables, such as {{input}}, can be inserted; the variables in the prompt are replaced with the values filled in by the user.

The prompt we are filling in here is:

I want you to be the interviewer for the {{jobName}} position. I will be the candidate, and you will ask me interview questions for the position of {{jobName}} developer. I hope you will only answer as the interviewer. Don't write all the questions at once. I wish for you to only interview me. Ask me questions and wait for my answers. Don't write explanations. Ask me one by one like an interviewer and wait for my answer.

When I am ready, you can start asking questions.

For a better experience, we will add an opening dialogue: "Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"

To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Opening remarks" feature:

And then edit the opening remarks:


2.2 Debugging

Fill in the user input on the right side to debug the conversation.


2.3 Publish

After debugging the application, click the "Publish" button in the upper right corner to save the current settings.


2.4 Adding Context

If you want the application to generate content based on private context, you can use the dataset feature. Click the "Add" button in the context section to add a dataset.


If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model. We support the GPT-4 model.

Share Application

On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application. Click the "Share" button to get the sharing link address. Click the "Settings" button to set the shared application information.


If you want to customize the application that you share, you can fork our open-source WebApp template. Based on the template, you can modify the application to meet your specific needs and style requirements.

External Data Tool

Previously, Datasets&Index allowed developers to directly upload long texts in various formats and structured data to build datasets, enabling AI applications to converse based on the latest context uploaded by users. With this update, the external data tool empowers developers to use their own search capabilities or external data such as internal knowledge bases as the context for LLMs. This is achieved by extending APIs to fetch external data and embedding it into Prompts. Compared to uploading datasets to the cloud, using external data tools offers significant advantages in ensuring the security of private data, customizing searches, and obtaining real-time data.

What does it do?

When end-users make a request to the conversational system, the platform backend triggers the external data tool (i.e., calling its own API), which queries external information related to the user's question, such as employee profiles, real-time records, etc. The tool then returns through the API the portions relevant to the current request. The platform backend will assemble the returned results into text as context injected into the Prompt, in order to produce replies that are more personalized and meet user needs more accurately.
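As a rough sketch, such an external data API could look like the following. The endpoint path, header name, and JSON fields are assumptions for illustration; the actual extension contract is described in the External_data_tool reference.

```python
# A hypothetical external data endpoint -- the request/response shape and
# header name are assumptions for illustration, not the platform's spec.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_KEY = "your-secret-key"  # shared with the platform's extension settings

EMPLOYEE_PROFILES = {
    "alice": "Alice Chen, Senior ML Engineer, joined 2019.",
}

@app.post("/external-data")
def external_data():
    # Authenticate the platform's callback with the configured API key
    if request.headers.get("Authorization") != f"Bearer {API_KEY}":
        abort(401)
    query = request.get_json().get("query", "").lower()
    # Return only the portions relevant to the current request
    results = [text for name, text in EMPLOYEE_PROFILES.items() if name in query]
    return jsonify({"result": "\n".join(results)})

if __name__ == "__main__":
    app.run(port=8000)
```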

Quick Start

1. Before using the external data tool, you need to prepare an API and an API Key for authentication. Head to External_data_tool.

2. Cyberwisdom TalentBot LLMops offers centralized API management. After adding API extension configurations in the settings interface, they can be used directly across various applications on Cyberwisdom TalentBot LLMops.

[Image: API-based Extension]

3. Taking "Query Weather" as an example, enter the name, API endpoint, and API Key in the "Add New API-based Extension" dialog box. After saving, we can then call the API.

[Image: Weather Inquiry]

4. On the prompt orchestration page, click the "+ Add" button to the right of "Tools", and in the "Add Tool" dialog that opens, fill in the name and variable name (the variable name will be referenced in the Prompt, so please use English), and select the API-based extension added in Step 2.

[Image: External_data_tool]

5. In the prompt orchestration box, we can assemble the queried external data into the Prompt. For instance, if we want to query today's weather in London, we can add a variable named location, enter "London", and combine it with the external data tool's extension variable name weather_data. The debug output would be as follows:

[Image: Weather_search_tool]
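To make the assembly concrete, here is a rough sketch of the substitution, assuming weather_data holds the text returned by the extension. The weather string below is an illustrative sample value, not real API output:

```python
# Rough sketch of assembling external data into the prompt.
prompt = (
    "Answer using the following real-time information:\n"
    "{{weather_data}}\n\n"
    "Question: What is today's weather in {{location}}?"
)

# weather_data is filled by the platform from the API-based extension;
# location is filled in by the end-user. Sample value for illustration only:
weather_data = "London: 12C, light rain"

filled = prompt.replace("{{location}}", "London").replace("{{weather_data}}", weather_data)
print(filled)
```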

In the Prompt Log, we can also see the real-time data returned by the API:

[Image: Prompt Log]

Moderation

In our interactions with AI applications, we often have stringent requirements in terms of content security, user experience, and legal regulations. At this point, we need the "Sensitive Word Review" feature to create a better interactive environment for end-users. On the prompt orchestration page, click "Add Feature" and locate the "Content Review" toolbox at the bottom:

[Image: Content moderation]

Call the OpenAI Moderation API

OpenAI, along with most companies providing LLMs, includes content moderation features in their models to ensure that outputs do not contain controversial content, such as violence, sexual content, and illegal activities. Additionally, OpenAI has made this content moderation capability available, which you can refer to at https://platform.openai.com/docs/guides/moderation/overview.

Now you can also directly call the OpenAI Moderation API on Cyberwisdom TalentBot LLMops; you can review either input or output content simply by entering the corresponding "preset reply."

[Image: OpenAI Moderation]
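For reference, calling the Moderation API directly with OpenAI's Python SDK looks roughly like this; this shows the underlying API, not how the platform invokes it internally:

```python
# Direct call to the OpenAI Moderation API (openai>=1.0 Python SDK);
# how the platform invokes it internally is not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(input="...text to review...")
result = response.results[0]
print(result.flagged)              # True if any category was flagged
print(result.categories.violence)  # per-category booleans
```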

Keywords

Developers can customize the sensitive words they need to review, for example using "kill" as a keyword so that an audit action is performed when a user's input contains it. With the preset reply content set to "The content is violating usage policies.", it can be anticipated that when a user inputs a text snippet containing "kill" at the terminal, the sensitive word review tool will be triggered and the preset reply content returned.

[Image: Keywords]
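At its core, keyword review amounts to a simple matching check. A minimal sketch of the idea (not the platform's implementation):

```python
# Minimal keyword-moderation sketch -- illustrates the idea only,
# not the platform's actual implementation.
from typing import Optional

KEYWORDS = {"kill"}
PRESET_REPLY = "The content is violating usage policies."

def moderate(text: str) -> Optional[str]:
    """Return the preset reply if a sensitive keyword is found, else None."""
    lowered = text.lower()
    if any(word in lowered for word in KEYWORDS):
        return PRESET_REPLY
    return None

print(moderate("I will kill time at the airport"))  # triggers the keyword check
```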

Moderation Extension

Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge-base ChatBot, enterprises need to moderate the query content input by employees for sensitive words. For this purpose, developers can write an API extension based on their enterprise's internal sensitive word moderation mechanisms (see Moderation Extension), which can then be called on Cyberwisdom TalentBot LLMops to achieve a high degree of customization and privacy protection for sensitive word review.

[Image: Moderation Extension]
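Such an extension is, in essence, an API endpoint that flags content and hands back a preset reply. A hedged sketch with an assumed request/response shape follows; consult the Moderation Extension reference for the actual contract:

```python
# Hypothetical moderation extension endpoint -- the JSON fields here are
# illustrative assumptions; consult the Moderation Extension reference
# for the actual contract.
from flask import Flask, jsonify, request

app = Flask(__name__)
INTERNAL_BLOCKLIST = {"codename-x", "unreleased-product"}  # example terms

@app.post("/moderation")
def moderation():
    text = request.get_json().get("text", "").lower()
    flagged = any(term in text for term in INTERNAL_BLOCKLIST)
    return jsonify({
        "flagged": flagged,
        "preset_reply": "The content is violating usage policies." if flagged else None,
    })

if __name__ == "__main__":
    app.run(port=8001)
```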
