ANTHROPIC AI Fluency: Key Terminology Cheat Sheet

Core AI Fluency Framework Terms

AI Fluency: The ability to work with AI systems in ways that are effective, efficient, ethical, and safe.
Progress: Mastered 0 (0%) | Reviewing 2 (4%) | Learning 2 (4%) | New 49 (96%)
Why is "Problem Awareness" essential before delegating tasks to AI?
Compare and contrast the "Automation" and "Augmentation" interaction modes.
What does the "Description" competency entail when working with an AI model?
True or false: "Platform Awareness" only matters when selecting the cheapest AI tool.
How would you apply "Product Discernment" to evaluate a generated report?
What is a common mistake when performing "Task Delegation"?
Why does "Transparency Diligence" matter in AI‑assisted work?
Which of the 4Ds involves evaluating how the AI arrived at its output?
What could go wrong if you neglect "Deployment Diligence" after receiving AI output?
Edge case: An AI system generates a plausible but fabricated citation. Which competency should catch this, and how?
How does "Creation Diligence" differ from "Performance Diligence"?
What's wrong with this reasoning? "Since AI can generate text quickly, I should let it write all my reports without reviewing them."
What distinguishes the Automation interaction mode from Augmentation?
Define the Agency interaction mode in human‑AI systems.
Why might a user prefer Agency over Automation for a recurring task?
Compare the level of human control in Automation versus Agency.
True or false: In Augmentation, the AI can make final decisions without human input.
What is a common misconception about Automation regarding AI creativity?
Edge case: If an AI configured for Agency receives ambiguous instructions, what should happen?
When would Augmentation be the preferred mode for medical diagnosis support?
How does the output sharing differ between Automation and Agency?
What's wrong with this reasoning? "If an AI can perform a task automatically, there is no need for Augmentation."
Explain a scenario where shifting from Automation to Agency could introduce risk.
What core concept unites Automation, Augmentation, and Agency?
Why does the transformer architecture enable LLMs to process long text sequences more efficiently than earlier recurrent models?
What is a "parameter" in an LLM and why does increasing the number of parameters generally improve performance?
Compare and contrast pre‑training and fine‑tuning in the lifecycle of a generative AI model.
True or false: Scaling laws guarantee that any increase in data, compute, or parameters will always improve an AI model's capabilities.
What is a "hallucination" in LLM output, and why does it occur even when the model is confident?
How does the temperature setting affect the trade‑off between creativity and reliability in generated text?
When would you use Retrieval‑Augmented Generation (RAG) instead of a plain LLM, and what problem does it solve?
Define "context window" and explain why exceeding its limit leads to truncated or ignored information.
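The context-window question above can be made concrete with a small sketch. The window size, the use of whitespace-split words as stand-in tokens, and the drop-oldest-first truncation policy are all simplifying assumptions for illustration; real tokenizers and serving stacks differ.

```python
# Context-window sketch: the model only "sees" the most recent N tokens.
# Whitespace words stand in for tokens here; real tokenizers split differently.

CONTEXT_WINDOW = 8  # hypothetical limit, in tokens

def visible_to_model(prompt: str, limit: int = CONTEXT_WINDOW) -> str:
    tokens = prompt.split()
    # Anything before the window is silently dropped, which is why an early
    # instruction can appear "ignored" once the input exceeds the limit.
    return " ".join(tokens[-limit:])

long_prompt = "Always answer in French. " + "filler " * 10 + "What is 2+2?"
print(visible_to_model(long_prompt))  # the French instruction has fallen out
```

This is why truncation feels like the model "forgetting": the instruction was never in the input it actually processed.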
Why might a model fine‑tuned for instruction following still produce harmful content, and how can this be mitigated?
Edge case: If a model's knowledge cutoff is 2023 but you ask about a 2024 event, what is the likely response and why?
How do neural networks differ from biological brains, and why is this distinction important when interpreting AI behavior?
Why does a higher temperature setting in a generative model tend to produce more varied and creative outputs?
How does lowering the temperature affect the predictability of AI responses?
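The two temperature questions above come down to one mechanism: temperature divides the logits before the softmax, so low values sharpen the output distribution (predictable) and high values flatten it (varied). The token logits below are invented purely for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # sharp: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flat: probability spreads out

print(low[0] > high[0])  # True: low temperature concentrates mass on the top token
```

Sampling from the flattened distribution is what makes high-temperature output more varied; at temperature approaching 0, the top token is picked almost every time.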
Explain the core idea behind Retrieval-Augmented Generation (RAG).
What is a common misconception about RAG’s ability to eliminate hallucinations?
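The core RAG loop behind the two questions above can be sketched in a few lines. A real system would embed the query and search a vector store; the toy keyword-overlap retriever, the document texts, and the prompt format here are all invented stand-ins.

```python
# Minimal RAG sketch: retrieve relevant text, then ground the prompt in it.
DOCS = [
    "The 4Ds framework covers Delegation, Description, Discernment, and Diligence.",
    "Temperature controls how sharply the model's output distribution is peaked.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    # Grounding the answer in retrieved context reduces hallucination, but does
    # not eliminate it: the model can still misread or ignore the context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does temperature control?"))
```

The comment in build_prompt is the answer to the misconception question: RAG constrains the model's inputs, not its decoding, so fabrication remains possible.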
Define bias in the context of AI outputs.
Compare and contrast a "prompt" and "prompt engineering".
Why does chain‑of‑thought prompting often improve performance on reasoning tasks?
What is the difference between few‑shot and zero‑shot prompting?
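The few-shot vs zero-shot distinction above is purely a difference in prompt construction: whether worked examples precede the task. The sentiment task and example pairs below are invented for illustration.

```python
# Zero-shot vs few-shot prompting: same task, with or without worked examples.
TASK = "Classify the sentiment of: 'The battery died after an hour.'"

zero_shot = f"{TASK}\nSentiment:"

few_shot_examples = [
    ("'I love this phone.'", "positive"),
    ("'The screen cracked on day one.'", "negative"),
]

# The few-shot prompt prepends labeled examples, then poses the same task.
few_shot = "".join(
    f"Classify the sentiment of: {text}\nSentiment: {label}\n\n"
    for text, label in few_shot_examples
) + zero_shot

print(few_shot)
```

The examples demonstrate the desired label format in-context; no model weights change, which is what separates few-shot prompting from fine-tuning.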
How would you use a role or persona definition to improve a technical explanation?
Give an example of an output constraint you might add to a prompt requesting a summary.
What is the "think‑first" approach and when is it most useful?
True or false: Adding more examples in few‑shot prompting always improves model performance.
What's wrong with this reasoning? "Because the model generated a factually correct sentence, it must have accessed the correct source document via RAG."
Edge case: If you set temperature to 0 but also use a very ambiguous instruction, what kind of output can you expect?
When would you prefer a higher temperature setting over a lower one in a creative writing task?
What's wrong with this reasoning? "Because an LLM has billions of parameters, it must understand the meaning of every word it generates."