
Module 1: Foundations
Demystifying AI

Understand what an LLM is and how it produces answers, and learn to treat the prompt as a navigation instruction through the model's knowledge. No magic, just statistical mathematics.

Estimated time: 3-4 hours
4 Practical Exercises
Beginner Level
AI Act Ready


🎯 Module Objective

By the end of this module, you will understand how Large Language Models work, be able to deconstruct prompts into their fundamental components, and effectively control AI responses through temperature and model settings.

You will learn:
  • The statistical mathematics behind LLMs
  • Prompt anatomy and component identification
  • Comparative analysis of different AI models
  • Temperature and parameter effects on AI responses
  • Best practices for clear prompt structure

What is an LLM? Not Magic, but Statistical Mathematics

A Large Language Model (LLM) is a sophisticated statistical predictor built on the Transformer architecture. Given a sequence of tokens (the context), it uses self-attention mechanisms to compute a probability distribution over the possible next tokens.

💡 Technical Deep Dive: Modern LLMs use the Transformer architecture with billions of parameters trained on trillions of tokens. "Knowledge" is stored as statistical patterns in the model's weights, not as a traditional database.
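
To make this concrete, here is a minimal sketch of next-token prediction. The vocabulary and logit scores are made up for illustration, but the softmax step is exactly how a model turns raw scores into a probability distribution over the next token:

import numpy as np

# Hypothetical vocabulary and logits (the raw scores a model would output)
vocab = ["tax", "regime", "freelancer", "invoice", "cat"]
logits = np.array([2.1, 1.4, 0.9, 0.5, -3.0])

# Softmax: turn scores into probabilities that sum to 1
probs = np.exp(logits) / np.exp(logits).sum()

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{token:<12} {p:.3f}")
# The model then samples (or greedily picks) the next token from this distribution.
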
Exercise 1.1: Comparative Model Analysis (Beginner)

Test the same prompt on different models:

Prompt: "Explain the flat-rate tax system in two lines for a digital freelancer."

Task:

  1. Execute the prompt on at least two different models (e.g., ChatGPT, Claude, Gemini), in their chat interfaces or via API (see the sketch after this list)
  2. Record the responses and compare them
  3. Identify at least 3 substantial differences between the explanations
  4. Explain why you think these differences exist
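
If you prefer to run the comparison programmatically, here is a minimal sketch using the OpenAI and Anthropic Python SDKs. It assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in your environment, and the model names are examples that may need updating:

from openai import OpenAI
import anthropic

prompt = "Explain the flat-rate tax system in two lines for a digital freelancer."

# Same prompt to an OpenAI model (model name is an example)
openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Same prompt to an Anthropic model (model name is an example)
claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

print("OpenAI:\n", openai_reply)
print("\nClaude:\n", claude_reply)
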
Reference Solution

Typical observations:

  • GPT-4 tends to be more structured and formal
  • Claude often provides practical examples
  • Gemini may be more concise but less detailed

Why they differ: Each model has different training data, specific optimizations, and "personalities" imposed during fine-tuning, for example through reinforcement learning from human feedback (RLHF).

Exercise 1.2: LLM Settings and Their Effect (Intermediate)

Test scenarios:

  1. Create a prompt to generate a short fantasy story (max 100 words)
  2. Execute the same prompt with different temperatures: 0.2, 0.7, 1.0 (a loop for this is sketched after this list)
  3. Document how creativity and coherence change
  4. Test the same prompt with top_p: 0.5 and 0.9
  5. Compare results and identify the best setup for creative but coherent stories
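
A minimal sketch of step 2, assuming the OpenAI Python SDK with OPENAI_API_KEY set (the model name is an example); the same loop works for step 4 by swapping in the top_p parameter:

from openai import OpenAI

client = OpenAI()
prompt = "Write a fantasy short story in at most 100 words."

# Run the identical prompt at each temperature and compare the outputs
for temperature in (0.2, 0.7, 1.0):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(resp.choices[0].message.content)
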
Results Analysis

Expected results:

  • Temperature 0.2: Very similar stories, little variety
  • Temperature 0.7: Good balance between creativity and coherence
  • Temperature 1.0: Very creative stories but sometimes incoherent
  • Top_p 0.5: More focused and predictable responses
  • Top_p 0.9: Greater variety in responses

Recommended setup: temperature=0.7, top_p=0.8 for creative but coherent stories.
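
To see why these settings behave this way, here is a sketch of the sampling mechanics themselves, using made-up logits: temperature rescales the scores before softmax (sharpening or flattening the distribution), while top_p keeps only the smallest set of tokens whose cumulative probability covers the threshold (nucleus sampling).

import numpy as np

rng = np.random.default_rng(0)
vocab = np.array(["dragon", "castle", "wizard", "sword", "spreadsheet"])
logits = np.array([2.0, 1.5, 0.8, 0.2, -1.0])  # hypothetical model scores

def sample(logits, temperature=1.0, top_p=1.0):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                       # softmax with temperature
    order = np.argsort(probs)[::-1]            # tokens sorted by probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = order[:cutoff]                   # smallest set covering top_p
    p = probs[nucleus] / probs[nucleus].sum()  # renormalize inside the nucleus
    return rng.choice(nucleus, p=p)

# Low temperature collapses onto the top token; high temperature spreads out;
# low top_p trims the unlikely tail regardless of temperature.
for t, p in [(0.2, 1.0), (1.0, 1.0), (1.0, 0.5)]:
    draws = [vocab[sample(logits, t, p)] for _ in range(8)]
    print(f"temperature={t}, top_p={p}: {draws}")
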

Exercise 1.3: Anatomy of a Prompt (Beginner)

Task: Decompose this prompt into its 4 fundamental components:

As a digital marketing expert, analyze this landing page for a fitness app.
Identify 3 strengths and 3 areas for improvement.
Provide concrete recommendations to increase conversions.

Landing page: [Landing page text here]

Response format:
1. Strengths
2. Areas for improvement  
3. Recommendations
4. Implementation priority (high/medium/low)

Identify: Instruction, Context, Input Data, Output Indicator.

Component Analysis
  • Instruction: "analyze this landing page", "identify 3 strengths and 3 areas for improvement", "provide recommendations"
  • Context: "As a digital marketing expert", "for a fitness app", "to increase conversions"
  • Input Data: "Landing page: [Landing page text here]"
  • Output Indicator: "Response format:" followed by the specified structure
๐Ÿ› ๏ธ Best Practice Use clear separators like "###" or "---" between different prompt sections. This helps the model distinguish instructions from data and context.

Ready for the next step?
