Oracle Cloud Infrastructure 2024 Generative AI Professional (1Z0-1127-24) Test

Test: Skill Check: Fundamentals of Large Language Models
1. What does in-context learning in Large Language Models involve?
Ans: Conditioning the model with task-specific instructions or demonstrations 

2. What is prompt engineering in the context of Large Language Models (LLMs)?
Ans: Iteratively refining the prompt to elicit a desired response

3. What is the role of temperature in the decoding process of a Large Language Model (LLM)?
Ans: To adjust the sharpness of the probability distribution over the vocabulary when selecting the next word
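
A minimal sketch of how temperature rescales the next-token distribution; the logits and three-token vocabulary below are invented for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature before softmax changes the
    # sharpness of the distribution: <1 sharpens it, >1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # sharper: the top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: choices become more even
```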

4. What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?
Ans: The phenomenon where the model generates factually incorrect information or unrelated content as if it were true 

5. Which statement accurately reflects the differences between Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) in terms of the number of parameters modified and the type of data used?
Ans: Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter-Efficient Fine-Tuning updates only a small number of new parameters, also with labeled, task-specific data.
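
As a hedged sketch of the PEFT idea, the snippet below freezes a base model and attaches small trainable LoRA matrices via Hugging Face's peft library; the gpt2 checkpoint, target module name, and hyperparameters are illustrative choices, not part of the exam material:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)  # base weights frozen, adapters added
model.print_trainable_parameters()    # only a tiny fraction is trainable
```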

Test: Skill Check: OCI Generative AI Service Deep Dive

1. What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)?
Ans: It provides examples in the prompt to guide the LLM to better performance with no training cost. 
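
A minimal few-shot prompt illustrating the idea: the labeled examples condition the model at inference time, so no weights change and no training cost is incurred (the reviews below are made up):

```python
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: The screen cracked within a week.
Sentiment: Negative

Review: Setup was quick and painless.
Sentiment:"""
# Sending this prompt to any completion endpoint should elicit "Positive".
```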

2. Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?
Ans: The GPUs allocated for a customer’s generative AI tasks are isolated from other GPUs.

3. What is the purpose of embeddings in natural language processing?
Ans: To create numerical representations of text that capture the meaning and relationships between words or phrases
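
A small sketch of why those numerical representations are useful: related phrases end up close together in vector space. The three-dimensional vectors are invented; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

king = [0.8, 0.6, 0.1]    # invented embedding vectors
queen = [0.7, 0.7, 0.15]
banana = [0.1, 0.2, 0.9]
print(cosine_similarity(king, queen))   # high: related meanings
print(cosine_similarity(king, banana))  # low: unrelated meanings
```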

4. What is the purpose of frequency penalties in language model outputs?
Ans: To penalize tokens that have already appeared, based on the number of times they have been used 
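
A sketch of the mechanism, assuming next-token logits are available as a token-to-score map: each token's score is reduced in proportion to how often it has already been generated:

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty):
    # Subtract penalty * (times already used) from each token's score.
    counts = Counter(generated_tokens)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}

logits = {"the": 2.0, "cat": 1.5, "sat": 1.0}   # hypothetical next-token scores
generated = ["the", "cat", "sat", "on", "the"]  # "the" already appeared twice
print(apply_frequency_penalty(logits, generated, penalty=0.5))
# "the" drops from 2.0 to 1.0, making repetition less likely.
```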

5. What happens if a period (.) is used as a stop sequence in text generation?
Ans: The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher. 
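
A sketch of stop-sequence handling; generate_token is a hypothetical stand-in for a real decoding step:

```python
def generate_with_stop(generate_token, stop=".", max_tokens=512):
    # Generation halts at the first stop sequence, even though the
    # token budget (max_tokens) would allow much more text.
    output = ""
    for _ in range(max_tokens):
        output += generate_token(output)
        if stop in output:
            return output[: output.index(stop) + len(stop)]
    return output
```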

Test: Skill Check: Building Blocks for an LLM Application

1. Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?
Ans: They rely on internal knowledge learned during pretraining on a large text corpus. 

2. What differentiates Semantic search from traditional keyword search?
Ans: It involves understanding the intent and context of the search. 
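
A sketch contrasting the two approaches; embed and similarity are hypothetical stand-ins for a real embedding model and a vector-similarity function:

```python
def keyword_search(query, docs):
    # Matches only on literal token overlap.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def semantic_search(query, docs, embed, similarity):
    # Ranks by closeness in embedding space, so "car" can match
    # "automobile" even though the two strings share no keywords.
    q = embed(query)
    return sorted(docs, key=lambda d: similarity(q, embed(d)), reverse=True)
```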

3. What do embeddings in Large Language Models (LLMs) represent?
Ans: The semantic content of data in high-dimensional vectors 

4. What is the function of the Generator in a text generation system?
Ans: To generate human-like text using the information retrieved and ranked, along with the user's original query 

5. What does the Ranker do in a text generation system?
Ans: It evaluates and prioritizes the information retrieved by the Retriever. 
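
Questions 1, 4, and 5 describe the same Retriever -> Ranker -> Generator flow; here is a minimal sketch, with retrieve, rank, and llm as hypothetical stand-ins:

```python
def rag_answer(query, retrieve, rank, llm, top_k=3):
    candidates = retrieve(query)            # Retriever: fetch relevant documents
    best = rank(query, candidates)[:top_k]  # Ranker: prioritize what was retrieved
    context = "\n".join(best)
    # Generator: produce text from the ranked context plus the original query.
    return llm(f"Answer using this context:\n{context}\n\nQuestion: {query}")
```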

Test: Skill Check: Build an LLM Application using OCI Generative AI Service

1. How are chains traditionally created in LangChain?
Ans: Using Python classes, such as LLMChain and others
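
A sketch of this traditional, class-based style (LLMChain is deprecated in recent LangChain releases but still illustrates the point); FakeListLLM stands in for a real model such as the OCI Generative AI LLM:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_community.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["Chains compose prompts, models, and parsers."])
prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(text="LangChain chains link components into a pipeline."))
```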

2. What is the purpose of memory in the LangChain framework?
Ans: To store various types of data and provide algorithms for summarizing past interactions 
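
A minimal example with ConversationBufferMemory, one of the memory classes LangChain provides: past turns are stored and can be replayed into later prompts:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "My name is Priya."},
                    {"output": "Nice to meet you, Priya!"})
print(memory.load_memory_variables({}))
# {'history': 'Human: My name is Priya.\nAI: Nice to meet you, Priya!'}
```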

3. What is LCEL in the context of LangChain Chains?
Ans: A declarative way to compose chains together using LangChain Expression Language 
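
A minimal LCEL chain: the | operator composes a prompt, a model, and an output parser declaratively; FakeListLLM again stands in for a real model:

```python
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["Paris"])
chain = (
    PromptTemplate.from_template("What is the capital of {country}?")
    | llm
    | StrOutputParser()
)
print(chain.invoke({"country": "France"}))  # -> "Paris"
```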

4. What is the function of "Prompts" in the chatbot system?
Ans: They are used to initiate and guide the chatbot's responses. 

5. How are prompt templates typically designed for language models?
Ans: As predefined recipes that guide the generation of language model prompts
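
A short example of such a recipe, using LangChain's PromptTemplate; the topic and question values are made up:

```python
from langchain.prompts import PromptTemplate

# The template is a reusable recipe; placeholders are filled at run time.
template = PromptTemplate.from_template(
    "You are a helpful assistant.\n"
    "Answer the question about {topic}: {question}"
)
print(template.format(topic="OCI", question="What is a dedicated AI cluster?"))
```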