AI-102: Designing and Implementing a Microsoft Azure AI Solution
Question#1

DRAG DROP
You have 100 chatbots, each with its own Language Understanding model.
Frequently, you must add the same phrases to each model.
You need to programmatically update the Language Understanding models to include the new phrases.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all.
You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:

Answer:
Box 1: AddPhraseListAsync

Example: Add a phrase list feature
var phraselistId = await client.Features.AddPhraseListAsync(appId, versionId, new PhraselistCreateObject
{
    EnabledForAllModels = false,
    IsExchangeable = true,
    Name = "QuantityPhraselist",
    Phrases = "few,more,extra"
});

Box 2: PhraselistCreateObject
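
Applied across all 100 chatbots, the same call can simply run in a loop. A minimal sketch using the LUIS authoring client library for .NET; the authoring key, endpoint, and app list are placeholders:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring.Models;

// Placeholder authoring key and regional endpoint.
var credentials = new ApiKeyServiceClientCredentials("<authoring-key>");
var client = new LUISAuthoringClient(credentials)
{
    Endpoint = "https://<region>.api.cognitive.microsoft.com"
};

// Populate with the app ID and active version of each of the 100 models.
var apps = new List<(Guid AppId, string VersionId)>();

foreach (var (appId, versionId) in apps)
{
    // Add the shared phrase list to this app's version.
    await client.Features.AddPhraseListAsync(appId, versionId, new PhraselistCreateObject
    {
        Name = "QuantityPhraselist",
        Phrases = "few,more,extra",
        IsExchangeable = true
    });
}
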
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/client-libraries-rest-api

Question#2

DRAG DROP
You plan to use a Language Understanding application named app1 that is deployed to a container.
App1 was developed by using a Language Understanding authoring resource named lu1.
App1 has the versions shown in the following table.

You need to create a container that uses the latest deployable version of app1.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Answer:
Step 1: Select v1.1 of app1.
A trained or published app is packaged as a mounted input to the container with its associated App ID. Of the versions listed, v1.1 is the latest deployable (trained) version of app1.
Step 2: Export the model using the Export for containers (GZIP) option.
Export the versioned app's package from the LUIS portal. The versioned app's package is available from the Versions list page.
1. Sign in to the LUIS portal.
2. Select the app in the list.
3. Select Manage in the app's navigation bar.
4. Select Versions in the left navigation bar.
5. Select the checkbox to the left of the version name in the list.
6. Select the Export item from the contextual toolbar above the list.
7. Select Export for container (GZIP).
8. The package is downloaded from the browser.
Step 3: Run the container and mount the model file.
Run the container, with the required input mount and billing settings.
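
A minimal sketch of the final step, assuming the documented LUIS container image and placeholder mount paths and billing values (the exported .gz package is copied to the input folder first):

docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 \
    --mount type=bind,src=<input-folder>,target=/input \
    --mount type=bind,src=<output-folder>,target=/output \
    mcr.microsoft.com/azure-cognitive-services/language/luis \
    Eula=accept \
    Billing=<endpoint-uri> \
    ApiKey=<api-key>
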
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-container-howto

Question#3

You need to build a chatbot that meets the following requirements:
* Supports chit-chat, knowledge base, and multilingual models
* Performs sentiment analysis on user messages
* Selects the best language model automatically
What should you integrate into the chatbot?

  • A. QnA Maker, Language Understanding, and Dispatch
  • B. Translator, Speech, and Dispatch
  • C. Language Understanding, Text Analytics, and QnA Maker
  • D. Text Analytics, Translator, and Dispatch

Answer: C
Language Understanding: An AI service that allows users to interact with your applications, bots, and IoT devices by using natural language.
QnA Maker is a cloud-based Natural Language Processing (NLP) service that allows you to create a natural conversational layer over your data. It is used to find the most appropriate answer for any input from your custom knowledge base (KB) of information.
Text Analytics: Mine insights in unstructured text using natural language processing (NLP), with no machine learning expertise required. Gain a deeper understanding of customer opinions with sentiment analysis. The Language Detection feature of the Azure Text Analytics REST API evaluates text input and returns the language detected for each document.
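
For the sentiment-analysis requirement, a minimal sketch using the Azure.AI.TextAnalytics .NET client library; the endpoint and key are placeholders for a Text Analytics resource:

using System;
using Azure;
using Azure.AI.TextAnalytics;

// Placeholder endpoint and key for the Text Analytics resource.
var client = new TextAnalyticsClient(
    new Uri("https://<resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

// Score the sentiment of a user message.
DocumentSentiment sentiment = client.AnalyzeSentiment("I love this bot, but it is a bit slow.");
Console.WriteLine($"Overall sentiment: {sentiment.Sentiment}");  // Positive / Neutral / Negative / Mixed

// The same client also exposes language detection.
DetectedLanguage language = client.DetectLanguage("Bonjour tout le monde");
Console.WriteLine($"Detected language: {language.Name}");
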
Incorrect Answers:
A, B, D: Dispatch uses sample utterances for each of your bot's different tasks (LUIS, QnA Maker, or custom), and builds a model that can be used to properly route your user's request to the right task, even across multiple bots.
Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/overview

Question#4

Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English.
You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.
Which Azure service should you use?

  • A. Custom Vision
  • B. Personalizer
  • C. Form Recognizer
  • D. Computer Vision

Answer: C
Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents; the service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more.
Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts, IDs and business cards, and the layout model.
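
For receipts specifically, a minimal sketch using the Azure.AI.FormRecognizer .NET client and the prebuilt receipt model; the endpoint, key, and file path are placeholders:

using System;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

// Placeholder endpoint and key for the Form Recognizer resource.
var client = new FormRecognizerClient(
    new Uri("https://<resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

using FileStream receipt = File.OpenRead(@"C:\receipts\example.jpg");

// The prebuilt receipt model extracts top-level fields such as the vendor and total.
RecognizeReceiptsOperation operation = await client.StartRecognizeReceiptsAsync(receipt);
RecognizedFormCollection forms = (await operation.WaitForCompletionAsync()).Value;

foreach (RecognizedForm form in forms)
{
    if (form.Fields.TryGetValue("MerchantName", out FormField merchant))
        Console.WriteLine($"Vendor: {merchant.ValueData.Text}");
    if (form.Fields.TryGetValue("Total", out FormField total))
        Console.WriteLine($"Total: {total.ValueData.Text}");
}
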
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer

Question#5

You have a factory that produces food products.
You need to build a monitoring solution for staff compliance with personal protective equipment (PPE) requirements. The solution must meet the following requirements:
* Identify staff who have removed masks or safety glasses.
* Perform a compliance check every 15 minutes.
* Minimize development effort.
* Minimize costs.
Which service should you use?

  • A. Face
  • B. Computer Vision
  • C. Azure Video Analyzer for Media (formerly Video Indexer)

Answer: A
Face API is an AI service that analyzes faces in images.
Embed facial recognition into your apps for a seamless and highly secured user experience. No machine-learning expertise is required. Features include face detection that perceives facial features and attributes, such as a face mask, glasses, or face location, in an image, and identification of a person by a match to your private repository or via photo ID.
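
A minimal sketch of the detection call against the Face REST API, with placeholder endpoint, key, and image URL; the mask attribute requires the detection_03 model (the glasses attribute is returned by detection_01, so a full compliance check would make one call per detection model):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<face-key>");

// detection_03 returns the mask attribute for each detected face.
var url = "https://<resource>.cognitiveservices.azure.com/face/v1.0/detect"
        + "?returnFaceAttributes=mask&detectionModel=detection_03&returnFaceId=false";
var body = new StringContent(
    "{\"url\": \"https://example.com/camera-frame.jpg\"}",
    Encoding.UTF8, "application/json");

HttpResponseMessage response = await http.PostAsync(url, body);
Console.WriteLine(await response.Content.ReadAsStringAsync());  // JSON with per-face mask info
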
Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/face/

Question#6

You have an Azure Cognitive Search solution and a collection of blog posts that include a category field.
You need to index the posts. The solution must meet the following requirements:
* Include the category field in the search results.
* Ensure that users can search for words in the category field.
* Ensure that users can perform drill down filtering based on category.
Which index attributes should you configure for the category field?

  • A. searchable, sortable, and retrievable
  • B. searchable, facetable, and retrievable
  • C. retrievable, filterable, and sortable
  • D. retrievable, facetable, and key

Answer: B
Searchable lets users search for words in the category field, facetable supports drill-down filtering on category, and retrievable includes the field in the search results.
Fields have data types and attributes. The check boxes across the top are index attributes controlling how the field is used.
* Retrievable means that the field shows up in the search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions.
* Filterable, Sortable, and Facetable determine whether fields are used in a filter, sort, or faceted navigation structure.
* Searchable means that a field is included in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
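
A sketch of how the category field could be defined with the Azure.Search.Documents .NET SDK; note that retrievable corresponds to IsHidden = false in this SDK:

using Azure.Search.Documents.Indexes.Models;

var categoryField = new SearchField("category", SearchFieldDataType.String)
{
    IsSearchable = true,  // words in the field are full-text searchable
    IsFacetable = true,   // enables drill-down/faceted navigation
    IsHidden = false      // "retrievable": the field is returned in results
};
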
Reference:
https://docs.microsoft.com/en-us/azure/search/search-get-started-portal

Question#7

SIMULATION
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected]

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You plan to build an API that will identify whether an image includes a Microsoft Surface Pro or Surface Studio.
You need to deploy a service in Azure Cognitive Services for the API. The service must be named AAA12345678 and must be in the East US Azure region. The solution must use the Free pricing tier.
To complete this task, sign in to the Azure portal.

Answer: See explanation below.
Step 1: In the Azure dashboard, click Create a resource.
Step 2: In the search bar, type "Cognitive Services."
You'll get information about the cognitive services resource and a legal notice. Click Create.
Step 3: You'll need to specify the following details about the cognitive service:
Subscription: choose your paid or trial subscription, depending on how you created your Azure account.
Resource group: click Create new to create a new resource group or choose an existing one.
Region: choose the Azure region for your cognitive service. Choose East US.
Name: choose a name for your cognitive service. Enter AAA12345678.
Pricing Tier: select the Free pricing tier.

Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource.
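
The same resource can also be provisioned from the command line. A sketch using the Azure CLI, assuming a single-service Custom Vision training resource (the Free F0 tier is available for that kind) and a placeholder resource group:

az cognitiveservices account create \
    --name AAA12345678 \
    --resource-group <resource-group> \
    --kind CustomVision.Training \
    --sku F0 \
    --location eastus \
    --yes
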
Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.

Tag visual features
Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
Try out the image tagging features quickly and easily in your browser using Vision Studio.
Reference:
https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis

Question#8

SIMULATION
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected]

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You need to build an API that uses the service in Azure Cognitive Services named AAA12345678 to identify whether an image includes a Microsoft Surface Pro or Surface Studio.
To achieve this goal, you must use the sample images in the C:\Resources\Images folder.
To complete this task, sign in to the Azure portal.

Answer: See explanation below.
Step 1: In the Azure dashboard, click Create a resource.
Step 2: In the search bar, type "Cognitive Services."
You'll get information about the cognitive services resource and a legal notice. Click Create.
Step 3: You'll need to specify the following details about the cognitive service:
Subscription: choose your paid or trial subscription, depending on how you created your Azure account.
Resource group: click Create new to create a new resource group or choose an existing one.
Region: choose the Azure region for your cognitive service. Choose East US.
Name: choose a name for your cognitive service. Enter AAA12345678.
Pricing Tier: select the Free pricing tier.
Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource.
Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.

Tag visual features
Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
Try out the image tagging features quickly and easily in your browser using Vision Studio.
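
As a sketch of analyzing the sample images with the deployed service, assuming the Microsoft.Azure.CognitiveServices.Vision.ComputerVision .NET SDK and placeholder endpoint and key values:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<key>"))
{
    Endpoint = "https://<resource>.cognitiveservices.azure.com/"
};

// Tag and brand-detect every sample image in the folder.
foreach (string path in Directory.GetFiles(@"C:\Resources\Images"))
{
    using FileStream image = File.OpenRead(path);
    ImageAnalysis analysis = await client.AnalyzeImageInStreamAsync(
        image,
        new List<VisualFeatureTypes?> { VisualFeatureTypes.Tags, VisualFeatureTypes.Brands });

    Console.WriteLine($"{Path.GetFileName(path)}: " +
        string.Join(", ", analysis.Tags.Select(t => $"{t.Name} ({t.Confidence:P0})")));
}
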
Reference:
https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis

Question#9

SIMULATION
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected]

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You need to get insights from a video file located at C:\Resources\Video\Media.mp4.
Save the insights to C:\Resources\Video\Insights.json.
To complete this task, sign in to the Azure Video Analyzer for Media at https://www.videoindexer.ai/ by using [email protected]

Answer: See explanation below.
Step 1: Sign in
Browse to the Azure Video Indexer website and sign in.
URL: https://www.videoindexer.ai/

Login: [email protected]
Step 2: Create a project from your video
You can create a new project directly from a video in your account.
1. Go to the Library tab of the Azure Video Indexer website.
2. Open the video that you want to use to create your project. On the insights and timeline page, select the Video editor button.
File: C:\Resources\Video\Media.mp4
This takes you to the same page that you used to create a new project. Unlike a new project, you see the timestamped insights segments of the video that you had started editing previously.
Step 3: Save the insights to C:\Resources\Video\Insights.json.
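
The insights can also be downloaded programmatically. A minimal sketch using the Video Indexer REST API, where the location, account ID, video ID, and access token are placeholders obtained from the portal or the API:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();

// GET the video index (insights) as JSON.
var indexUrl = "https://api.videoindexer.ai/<location>/Accounts/<accountId>" +
               "/Videos/<videoId>/Index?accessToken=<accessToken>";
string insightsJson = await http.GetStringAsync(indexUrl);

File.WriteAllText(@"C:\Resources\Video\Insights.json", insightsJson);
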
Reference:
https://docs.microsoft.com/en-us/azure/azure-video-indexer/use-editor-create-project

Question#10

SIMULATION
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected]

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You plan to analyze stock photography and automatically generate captions for the images.
You need to create a service in Azure to analyze the images. The service must be named caption12345678 and must be in the East US Azure region. The solution must use the Free pricing tier.
In the C:\Resources\Caption\Params.json file, enter the value for Key 1 and the endpoint for the new service.
To complete this task, sign in to the Azure portal.

Answer: See explanation below.
Step 1: Provision a Cognitive Services resource
If you don't already have one in your subscription, you'll need to provision a Cognitive Services resource.
1. Open the Azure portal at https://portal.azure.com, and sign in using the Microsoft account associated with your Azure subscription.
2. Select the Create a resource button, search for cognitive services, and create a Cognitive Services resource with the following settings:
Subscription: Your Azure subscription
Resource group: Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided)

Region: East US

Name: caption12345678

Pricing tier: Free F0
3. Select the required checkboxes and create the resource.
Wait for deployment to complete, and then view the deployment details.
4. When the resource has been deployed, go to it and view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.
Step 2: Save Key and Endpoint values in Params.json
Open the configuration file C:\Resources\Caption\Params.json, and update the configuration values it contains to reflect the endpoint and an authentication key for your cognitive services resource. Save your changes.
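
The Key 1 and endpoint values can also be read from the Azure CLI before pasting them into Params.json; <resource-group> is a placeholder:

az cognitiveservices account keys list --name caption12345678 --resource-group <resource-group> --query key1 --output tsv
az cognitiveservices account show --name caption12345678 --resource-group <resource-group> --query properties.endpoint --output tsv
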
Reference:
https://microsoftlearning.github.io/AI-102-AIEngineer/Instructions/15-computer-vision.html
