Oracle Cloud Infrastructure Generative AI Professional (1Z0-1127-24): Practice Exam

1. Which LangChain component is responsible for generating the linguistic output in a chatbot system?
Ans: LLMs

2. How does the structure of vector databases differ from traditional relational databases?
Ans: It is based on distances and similarities in a vector space. 

3. How does the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?
Ans: Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance. 

4. What does the RAG Sequence model do in the context of generating a response?
Ans: For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response. 

5. How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
Ans: Increasing the temperature flattens the distribution, allowing for more varied word choices. 
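
A minimal NumPy sketch of temperature scaling (the logits below are illustrative): dividing logits by a temperature above 1 flattens the softmax distribution, while a temperature below 1 sharpens it.

    import numpy as np

    def softmax_with_temperature(logits, temperature=1.0):
        # Scale logits by the temperature before applying softmax.
        scaled = np.asarray(logits, dtype=float) / temperature
        exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        return exp / exp.sum()

    logits = [2.0, 1.0, 0.1]
    print(softmax_with_temperature(logits, 0.5))  # peaked: greedy-like choices
    print(softmax_with_temperature(logits, 2.0))  # flatter: more varied word choices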

6. Given the following code block:

    from langchain_community.chat_message_histories import StreamlitChatMessageHistory
    from langchain.memory import ConversationBufferMemory

    history = StreamlitChatMessageHistory(key="chat_messages")
    memory = ConversationBufferMemory(chat_memory=history)

Which statement is NOT true about StreamlitChatMessageHistory?
Ans: StreamlitChatMessageHistory can be used in any type of LLM application. (This is the false statement: it is designed specifically for Streamlit apps.)

7. Which statement is true about string prompt templates and their capability regarding variables?
Ans: They support any number of variables, including the possibility of having none. 
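
A short LangChain sketch of both cases (assumes the langchain package; the prompt text is illustrative):

    from langchain.prompts import PromptTemplate

    # Zero variables: a fixed string is still a valid template.
    no_vars = PromptTemplate.from_template("Tell me a joke.")

    # Multiple variables, declared implicitly via {placeholders}.
    two_vars = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
    print(two_vars.format(adjective="short", topic="vector databases"))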

8. What is the purpose of Retrievers in LangChain?
Ans: To retrieve relevant information from knowledge bases 
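
A minimal retriever sketch (assumes langchain-community and faiss-cpu are installed; FakeEmbeddings stands in for a real embedding model):

    from langchain_community.embeddings import FakeEmbeddings
    from langchain_community.vectorstores import FAISS

    texts = ["LangChain chains LLM calls together.", "Vector stores hold embeddings."]
    vectorstore = FAISS.from_texts(texts, FakeEmbeddings(size=32))

    # Expose the vector store as a retriever that returns relevant documents.
    retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
    print(retriever.invoke("What holds embeddings?"))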

9. Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?
Ans: Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs. 
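
A minimal LoRA sketch using the Hugging Face peft library (the model name and target_modules are illustrative; assumes transformers and peft are installed):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Inject small trainable low-rank adapters; the base weights stay frozen.
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["c_attn"])
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # reports only a small fraction as trainable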

10. What does a cosine distance of 0 indicate about the relationship between two embeddings?
Ans: They are similar in direction (cosine distance = 1 - cosine similarity, so a distance of 0 means maximum directional similarity).
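
A NumPy sketch of cosine distance (the vectors are illustrative): a distance of 0 means the vectors point in the same direction, even if their magnitudes differ.

    import numpy as np

    def cosine_distance(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        similarity = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return 1.0 - similarity

    print(cosine_distance([1, 2, 3], [2, 4, 6]))  # ~0.0: same direction
    print(cosine_distance([1, 0], [0, 1]))        # 1.0: orthogonal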

11. What does the Loss metric indicate about a model's predictions?
Ans: Loss is a measure that indicates how wrong the model's predictions are.
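
A small cross-entropy sketch (the probabilities are illustrative): the loss grows as the model assigns less probability to the correct token.

    import numpy as np

    def cross_entropy(predicted_probs, true_index):
        # Negative log-probability assigned to the correct class.
        return -np.log(predicted_probs[true_index])

    print(cross_entropy(np.array([0.7, 0.2, 0.1]), 0))  # ~0.36: confident and correct
    print(cross_entropy(np.array([0.1, 0.2, 0.7]), 0))  # ~2.30: badly wrong, high loss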

12. What is the purpose of Retrieval Augmented Generation (RAG) in text generation?
Ans: To generate text using extra information obtained from an external data source 
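
A framework-free sketch of the RAG flow (retriever and llm are hypothetical stand-ins, not a specific API):

    def answer_with_rag(query, retriever, llm, k=3):
        # Fetch external context relevant to the query.
        documents = retriever.search(query, k=k)
        context = "\n".join(doc.text for doc in documents)
        # Ground the generation in the retrieved text.
        prompt = f"Use the context to answer.\nContext:\n{context}\nQuestion: {query}"
        return llm.generate(prompt)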

13. In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?
Ans: Choosing the word with the highest probability at each step of decoding 
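
A minimal greedy-decoding loop with Hugging Face Transformers (the "gpt2" checkpoint is used purely for illustration; assumes torch and transformers are installed):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    for _ in range(5):
        logits = model(input_ids).logits
        # Greedy step: always pick the single highest-probability next token.
        next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=-1)
    print(tokenizer.decode(input_ids[0]))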

14. Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?
Ans: Semantic relationships; crucial for understanding context and generating precise language 

15. In which scenario is soft prompting appropriate compared to other training styles?
Ans: When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training 
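
A minimal soft-prompting (prompt-tuning) sketch with the peft library (the values are illustrative): learnable virtual-token embeddings are added while the base model stays frozen, so no task-specific retraining of the model weights is needed.

    from transformers import AutoModelForCausalLM
    from peft import PromptTuningConfig, TaskType, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Prepend 8 learnable virtual tokens; base model weights remain frozen.
    config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only the soft-prompt embeddings train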

16. What is LangChain?
Ans: A Python library for building applications with Large Language Models 

17. Why is it challenging to apply diffusion models to text generation?
Ans: Because text representation is categorical, unlike images, which are continuous.

18. When does a chain typically interact with memory in a run within the LangChain framework?
Ans: After user input but before chain execution, and again after core logic but before output 

19. When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?
Ans: When the LLM does not perform well on a task and the data required for prompt engineering is too large to fit in the prompt.

20. In the simplified workflow for managing and querying vector data, what is the role of indexing?
Ans: To map vectors to a data structure for faster searching, enabling efficient retrieval 
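
A minimal FAISS sketch of indexing and querying (assumes the faiss-cpu package; dimensions and data are random placeholders):

    import numpy as np
    import faiss

    d = 128                                    # embedding dimensionality
    vectors = np.random.random((1000, d)).astype("float32")

    index = faiss.IndexFlatL2(d)               # map vectors into a searchable structure
    index.add(vectors)

    query = np.random.random((1, d)).astype("float32")
    distances, ids = index.search(query, 5)    # efficiently retrieve the 5 nearest vectors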

21. Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?
Ans: It selectively updates only a fraction of the model's weights. 

22. What does accuracy measure in the context of fine-tuning results for a generative model?
Ans: How many predictions the model made correctly out of all the predictions in an evaluation 
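
A worked example of that ratio (the labels are illustrative):

    predictions = ["A", "B", "C", "D"]
    references  = ["A", "B", "C", "A"]
    correct = sum(p == r for p, r in zip(predictions, references))
    accuracy = correct / len(references)  # 3 correct out of 4 -> 0.75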

23. What do prompt templates use for templating in language model applications?
Ans: Python's str.format syntax 
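
A quick illustration of the {placeholder} syntax that prompt templates build on:

    template = "Context: {context}\nQuestion: {question}"
    print(template.format(context="Paris is in France.", question="Where is Paris?"))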

24. How does a presence penalty function in language model generation?
Ans: It penalizes a token each time it appears after the first occurrence. 
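
A NumPy sketch of that behavior (a simplified illustration, not a specific vendor's implementation): at every decoding step, any token that has already appeared has its logit reduced.

    import numpy as np

    def apply_presence_penalty(logits, generated_ids, penalty=0.5):
        # After a token's first occurrence, it is penalized at each later step.
        logits = np.array(logits, dtype=float)
        for token_id in set(generated_ids):
            logits[token_id] -= penalty
        return logits

    print(apply_presence_penalty([2.0, 1.0, 0.5], generated_ids=[0, 0, 2]))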

25. How are documents usually evaluated in the simplest form of keyword-based search?
Ans: Based on the presence and frequency of the user-provided keywords
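
A minimal keyword-scoring sketch (the documents are illustrative):

    def keyword_score(document, keywords):
        # Score by presence and frequency of the user-provided keywords.
        words = document.lower().split()
        return sum(words.count(k.lower()) for k in keywords)

    docs = ["vector databases store embeddings", "relational databases store tables"]
    best = max(docs, key=lambda d: keyword_score(d, ["vector", "embeddings"]))
    print(best)  # the document with the most keyword hits ranks first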