
Conversational AI | Testing and Logging in Conversational Agents

Lab · 1 hour 30 minutes · 1 credit · Intermediate

Note: This lab may include integrated AI tools to support your learning.

Course overview

Gone are the bots of the past that interpreted inquiries incorrectly or couldn't maintain context within a conversation. Google's Contact Center as a Service (CCaaS) provides all the tools, services, and APIs you'll need to build your conversational agent so you can hold intelligent conversations with customers reaching out to you for assistance. In this course, you'll learn how to use Conversational Agents' built-in features to test, debug, and improve your agent. We will show you how to create and maintain test cases, use the Validations tool to improve agent functionality, review conversation history logs, and use the console to test and debug.

Objectives

In this lab, you'll explore some of the testing and logging tools available for developing an agent in Conversational Agents. By the end of this module, you'll be able to:

  • Use Conversational Agent tools for troubleshooting.
  • Use Google Cloud tools to debug your conversational agent.
  • Review logs generated by Conversational agent activity.

Resources

The following are some resources that may help you complete the lab components of this course:

Setup and requirements

Setting up

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Make sure you sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time and make sure you can finish in that time block.

  3. When ready, click Start Lab.

  4. Note your lab credentials. You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste the credentials for this lab into the prompts.

  7. Accept the terms and skip the recovery resource page.

Assumption: You've already logged into Google Cloud before continuing with the steps below.

  1. In the Google Cloud Console, enable the Dialogflow API.

  2. Navigate to AI Applications, click Continue, and activate the API.

  3. To create a new conversational agent, go to the AI Applications Apps Console, choose Conversational agent as the app type, and click Create.

  4. A new page for Conversational Agents opens. On this page, you should see a pop-up asking you to select a project.

  5. Search the list in the pop-up for the project that matches the Project ID assigned to you for this lab, and click it.

Note: If you don't see your Project ID listed, check the user shown on the right side to confirm that you are using Conversational Agents as the student account.


  6. Select Build your own on the Get started with Conversational Agents pop-up.

  7. On the Create agent page, for the agent display name, enter Cloudio-cx.

  8. For the location, select the region assigned to you for this lab.

  9. Ensure the timezone and default language are set appropriately. Set the Conversation start to Flow, and then click the Create button.

    Once the agent is created, you will see the design and configuration portion of the Conversational Agents UI.
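As an aside, the API enablement from step 1 above can also be done from Cloud Shell instead of the console UI. A sketch, assuming the standard service name for Dialogflow:

```shell
# Enable the Dialogflow API for the currently configured project.
gcloud services enable dialogflow.googleapis.com
```

This is a configuration step, so either route (console or CLI) leaves the project in the same state.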

Task 1. Importing a .blob Conversational agent file

In this task, you can import the conversational agent you worked on in an earlier lab, or you can import the conversational agent quick start. Choose the option you feel will help you get the most out of this lab.

Task 2. Creating a test case

In this task, you'll create a new test case and save it so that you can run it again later without retyping the whole conversation.

Let's start by viewing the Test Cases section in Conversational Agents.

  1. To create the test case, click on Toggle Simulator to open the Simulator pane.

  2. Enter the customer utterance, I want to upgrade my tier, for the ChangeTier intent.

    The Conversational Agent should respond with "Which tier do you want?"

  3. Enter gold.

    The Conversational Agent should respond with "What's the phone number on your account?"

  4. Enter 4155551212.

    The Conversational Agent should respond with "Your tier is now gold. Anything else?"

  5. Click the Create test case button above the simulator (in the upper right of the Simulator pane).

    Save Test Case

  6. Enter the following in the Create Test Case pop-up window:

    • Enter the Display Name as Change Tier.

    • Expand the Basic Settings section and enter the Tags as #gold.

• In the Notes field, enter Customer wants to upgrade to the gold tier.

  7. Click the Save test case button above the simulator.

    Save Test Case

  8. From the left navigation menu, navigate to Test cases under the TEST & EVALUATE tab.

  9. Select your test case named Change Tier from the list.

  10. Notice the Last Run State pane is currently blank. To execute your test case, click the Run selected button and select the Draft environment.

  11. Notice the Last tested pane is now populated with the test you just ran. The Last Run State now shows Pass, and the timestamp is more recent than your first test.

    Test Case

Note: You can open the View running operations list by clicking the hourglass icon in the upper right.
  12. Click on your test case to edit it.

  13. Notice there are two fields, one for Optional Settings and one for Expected conversation.

  14. Notice the Expected conversation script contains all of the conversation details you entered into the simulator.

  15. Click on the first user utterance, I want to upgrade my tier, and replace it with I want to change tier.

  16. Click Save.

    Updated Test Case

    What does this do? You're probably thinking, correctly, that Conversational Agents might match a different intent if it finds a training phrase that more closely matches the customer utterance.

    In this particular case, it's OK. Conversational Agents runs through the test case and matches it to the ChangeTier intent just as before.

    This is an easy way for you to make simple modifications to your test cases without rebuilding them, but you'll want to be careful not to make changes that result in a different scenario entirely.
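If you later want to manage test cases programmatically (Dialogflow CX exposes a testCases resource in its REST API), the test case you just built maps to a payload roughly like the sketch below. The top-level field names follow the public v3 TestCase reference, but treat the exact conversation-turn shape as an assumption to verify against the documentation:

```python
# Sketch of the Change Tier test case as a Dialogflow CX v3 TestCase payload.
# Field names are from the public REST reference; the nested turn structure
# is an illustrative assumption, not a verified schema.
change_tier_test_case = {
    "displayName": "Change Tier",
    "tags": ["#gold"],
    "notes": "Customer wants to upgrade to the gold tier.",
    "testCaseConversationTurns": [
        {
            "userInput": {"input": {"text": {"text": "I want to upgrade my tier"}}},
            "virtualAgentOutput": {"textResponses": [{"text": ["Which tier do you want?"]}]},
        },
        {
            "userInput": {"input": {"text": {"text": "gold"}}},
            "virtualAgentOutput": {"textResponses": [{"text": ["What's the phone number on your account?"]}]},
        },
        {
            "userInput": {"input": {"text": {"text": "4155551212"}}},
            "virtualAgentOutput": {"textResponses": [{"text": ["Your tier is now gold. Anything else?"]}]},
        },
    ],
}
```

Keeping test cases as data like this makes it easy to version them alongside your agent export.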

Task 3. Reviewing validation data

  1. Navigate to Flows, open the Manage tab, and then click Validations under the Feedback tab.

    Are there any validation errors or warnings?

  2. Click on the Flow dropdown and choose Default Start Flow if it is not already selected.

    Are there any errors or warnings now?

    Note: You may need to refresh the page to see errors and warnings.
  3. Expand the Intent: subscribe intent under Intent issues.

  4. Notice it contains the warning, The annotated text 'Wang' in training phrase 'i want to get a new account for 9255551212 with silver tier and last name is Wang' does not correspond to entity type '@sys.last-name'.

    Why do you think this is?

    Hint: Review training phrases and the system entities documentation.
  5. Scroll to the Flow issues section.

  6. Expand Page: Anything Else.

    You should see a warning, The page cannot reach END_FLOW/END_SESSION or another flow.

    What do you think this means?

Hint: The Anything Else page does not explicitly define an end page or flow. That's ok because we want Cloudio to wait for another request from the user (versus terminate the session).

Task 4. Getting to Google Cloud execution logs

In this task, you'll look at the execution logs generated for your Conversational Agent, which will help you to debug any issues.

  1. In the Google Cloud console, type "Logging" in the search bar, and then click Logging in the resulting options.

    The Logs Explorer opens by default. Don't worry if you don't see much there yet; there will be more once we do some more testing.

  2. Click on Last 1 hour (or, if it says Last, click on that).

  3. In Relative time, enter 15m and press Enter.

  4. Under All severities, look for Debug type logs if you have them. If not, try another log severity type (for instance, Info type).

  5. Notice that in the Query pane you can construct a query to include only logs of a specific type, or that match other criteria. For instance:

    severity=INFO

    This filters out everything except INFO type logs.

    You may want to inspect DEBUG type logs for webhooks in your Conversational Agent to see what the parameters were and how long they took.

    Sometimes you'll see DEBUG level logs that give you an indication of simple typos in your code. For example, let's say you forgot to set the value of the tier parameter in your cloud function.

    You might get a log such as the following, indicating your cloud function crashed:

    {
      "textPayload": "Function execution took 905 ms, finished with status: 'crash'",
      "insertId": "000000-0b19678b-a509-436a-b174-f7d36b722956",
      "resource": {
        "type": "cloud_function",
        "labels": {
          "function_name": "cloudioAccountUpdate",
          "region": "{{{ project_0.default_region | REGION }}}",
          "project_id": "qwiklabs-gcp-04-31bc7acc8e71"
        }
      },
      "timestamp": "2021-05-18T01:04:36.237217533Z",
      "severity": "DEBUG",
      "labels": { "execution_id": "iwjy0cpz8964" },
      "logName": "projects/{{{project_0.project_id | Project ID}}}/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
      "trace": "projects/{{{project_0.project_id | Project ID}}}/traces/ca514f5399280afec54e299a52c81c1b",
      "receiveTimestamp": "2021-05-18T01:04:37.318856769Z"
    }
  6. Compare the above with an example of a properly executed Cloud Function log:

    {
      "textPayload": "Function execution took 191 ms, finished with status code: 200",
      "insertId": "000000-ea4a1e48-80c7-497f-b2f4-fdfffae11123",
      "resource": {
        "type": "cloud_function",
        "labels": {
          "function_name": "cloudioAccountUpdate",
          "project_id": "{{{project_0.project_id | Project ID}}}",
          "region": "{{{ project_0.default_region | REGION }}}"
        }
      },
      "timestamp": "2021-05-16T16:36:54.804028817Z",
      "severity": "DEBUG",
      "labels": { "execution_id": "ms1uwwnuuqmu" },
      "logName": "projects/{{{project_0.project_id | Project ID}}}/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
      "trace": "projects/{{{project_0.project_id | Project ID}}}/traces/fb66f7bf07b139c4ef277d9954b97023",
      "receiveTimestamp": "2021-05-16T16:37:05.695763035Z"
    }
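When you're focused on webhook debugging, you can narrow the Logs Explorer view to just your cloud function's entries rather than filtering by severity alone. A sketch of such a query, using the function name that appears in the logs above:

```
resource.type="cloud_function"
resource.labels.function_name="cloudioAccountUpdate"
severity>=DEBUG
```

Each line is ANDed together, so this shows only logs from that function at DEBUG severity or higher.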
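To see how a crash like the one logged above can arise, here is a minimal, hypothetical Python sketch of a webhook handler along the lines of the lab's cloudioAccountUpdate function. The function body and parameter names are assumptions for illustration; only the request/response envelope follows the documented Dialogflow CX webhook format. The guard shows one way to avoid crashing when a session parameter such as tier was never set:

```python
def build_webhook_response(message, parameters=None):
    """Build a Dialogflow CX webhook response body."""
    response = {
        "fulfillmentResponse": {
            "messages": [{"text": {"text": [message]}}],
        }
    }
    if parameters is not None:
        response["sessionInfo"] = {"parameters": parameters}
    return response


def cloudio_account_update(request_json):
    """Hypothetical handler mirroring the lab's cloudioAccountUpdate function."""
    params = request_json.get("sessionInfo", {}).get("parameters", {})
    tier = params.get("tier")
    if tier is None:
        # Without this guard, code that assumes tier is always set would
        # raise, and the function would finish with status 'crash' in the
        # Cloud Logging entries shown above.
        return build_webhook_response("Sorry, I didn't catch the tier.")
    return build_webhook_response(
        f"Your tier is now {tier}. Anything else?",
        parameters={"tier": tier},
    )
```

Returning a well-formed response in the missing-parameter branch keeps the function exiting with status 200 instead of crashing.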


Task 5. Debugging from the Conversational Agents interface

Some debugging efforts are best done in the Conversational Agents simulator pane, with good old-fashioned detective work.

  1. Navigate to the ChangeTier flow under the Flows > Build tab in Conversational Agents.

  2. Click on the Get Tier page.

  3. Click on the phone_number parameter to bring up the configuration pane.

  4. Deselect Required.

  5. Click Save.

    Next, you'll run your Change Tier test case that you created earlier in this lab.

  6. Navigate to Test cases under the TEST & EVALUATE tab.

  7. Select your Change Tier test case.

  8. Click on the Run selected button and choose the Draft environment.

  9. Did the test case show a status of Fail?

    Test Case run output

    In this scenario, your test case expected a valid value for phone_number, but the agent never actually collected a phone number from the user. Your test case proved that the new scenario will not work for your Conversational Agent.

  10. Navigate to the ChangeTier flow under the Flows > Build tab, click on the Get Tier page, re-enable the Required option for the phone_number parameter, and then click Save.

  11. Rerun your Change Tier test case.

    Did it work this time?

    Excellent! You're well on your way to using logs as well as test cases to debug your conversational agent.


Congratulations!

You've created a simple agent in Conversational Agents. In the next section, we'll go over slightly more complex scenarios and build out more of the Cloudio functionality.

Completion

In this lab you may not have created any new functionality for your Conversational Agent, but you've seen what things could look like when they're not working.

Export your agent (optional)

  1. In the Agent dropdown at the top, choose View all agents.

  2. Click on the ellipsis menu (three vertical dots) on the right next to your Conversational Agent.

  3. Select Export.

    Your Conversational Agent will be downloaded to your local machine as a *.blob type file. You can use the Restore option to import it later.

Cleanup

  1. In the Cloud Console, sign out of the Google account.

  2. Close the browser tab.

End your lab

When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
