This technique allows arbitrary functions to be wrapped without knowing their signature.
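A minimal sketch of such a forwarding wrapper (the log_call name is illustrative, not from this text):
def log_call(func):
    def wrapper(*args, **kwargs):
        # forward whatever arguments were given, unchanged
        print(f"Calling {func.__name__} with {args} {kwargs}")
        return func(*args, **kwargs)
    return wrapper

@log_call
def add(a, b, *, scale=1):
    return (a + b) * scale

print(add(2, 3, scale=10))   # prints the call, then 50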
When to Use *args and **kwargs?
Use Case | *args | **kwargs
Unknown number of positional arguments | Yes | No
Unknown number of keyword arguments | No | Yes
Forwarding arguments in decorators/wrappers | Yes | Yes
Passing configuration options | No | Yes
Building flexible API helpers | Yes | Yes
Common Mistakes
Wrong order in function signature:
def wrong(**kwargs, *args): # ❌ invalid
Using args or kwargs without *:
def f(args): # ❌ not the same as *args
pass
Forgetting to unpack:
def f(a, b):
return a + b
nums = (1, 2)
# f(nums) ❌ TypeError
f(*nums) # ✔ correct
Summary
Concept | Description
*args | Captures extra positional arguments into a tuple.
**kwargs | Captures extra keyword arguments into a dictionary.
Unpacking | * unpacks lists/tuples, ** unpacks dictionaries.
Decorator usage | Forward arguments to wrapped functions easily.
Best practice | a, *args, sep="-", **kwargs follows correct ordering.
Python match Statement (Structural Pattern Matching)
Introduction
The match statement was introduced in Python 3.10.
Patterns can match:
values
types
structures (lists, dicts, tuples)
classes
guards (conditions)
Basic Value Matching
def check_status(code):
match code:
case 200:
return "OK"
case 404:
return "Not Found"
case 500:
return "Server Error"
case _:
return "Unknown"
_ is the wildcard catch-all.
Match Multiple Values in One Case
match command:
case "start" | "run":
print("Starting...")
case "stop" | "quit":
print("Stopping...")
case _:
print("Unknown command")
Use | (“OR”) for multi-pattern matching.
Capturing Values
With match, Python can look at a value (often coming from user input or a function)
and check whether it fits a certain “shape”.
While doing that, it can also capture parts of the value into variables.
# Example commands your program might receive
user_input = ("add", 10, 20)
# user_input = ("echo", "Hello")
# user_input = ("quit",)
match user_input:
case ("add", x, y):
# The tuple matches ("add", something, something)
# The two values are captured as x and y
print(x + y)
case ("echo", message):
# Matches a tuple with 2 elements: ("echo", some_text)
# 'message' captures the second element
print(message)
case ("quit",):
# Matches a single-element tuple
print("Goodbye!")
case _:
# Anything else that doesn't fit the patterns
print("Unknown command")
Capturing lets you extract needed values directly inside the pattern.
No manual unpacking or indexing is needed.
Sequence Pattern Matching
match data:
case [x, y]:
print(f"Two elements: {x}, {y}")
case [x, y, z]:
print(f"Three elements: {x}, {y}, {z}")
case [first, *rest]:
print("First element:", first)
print("Remaining:", rest)
*rest works like in argument unpacking.
Matching Dictionaries
match config:
case {"mode": "debug", "level": lvl}:
print("Debug level:", lvl)
case {"mode": "production"}:
print("Running in prod mode")
case _:
print("Unknown config")
Keys listed in the pattern must be present with matching values; extra keys in the subject are simply ignored (use **rest to capture them, as shown below).
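A small illustration (the config value here is hypothetical) of how **rest collects the extra keys:
config = {"mode": "debug", "level": 3, "color": True}
match config:
    case {"mode": "debug", "level": lvl, **rest}:
        # extra keys such as "color" end up in rest
        print("Debug level:", lvl, "other options:", rest)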
Matching Classes (Object Patterns)
class Point:
    __match_args__ = ("x", "y")   # required for positional patterns like Point(x, y)
    def __init__(self, x, y):
        self.x = x
        self.y = y
def describe(p):
match p:
case Point(x=0, y=0):
return "Origin"
case Point(x, y):
return f"Point({x}, {y})"
You can match attributes by name.
Using Guards (if conditions)
match value:
case x if x > 0:
print("Positive")
case x if x < 0:
print("Negative")
case 0:
print("Zero")
Matching Enums
from enum import Enum
class State(Enum):
READY = 1
RUNNING = 2
STOPPED = 3
match state:
case State.READY:
print("Ready")
case State.RUNNING:
print("Running")
case State.STOPPED:
print("Stopped")
Combining Patterns
match event:
case {"type": "click", "pos": (x, y)}:
print("Clicked:", x, y)
case {"type": "keypress", "key": k} if k.isalpha():
print("Pressed a letter:", k)
case {"type": "keypress", "key": k}:
print("Pressed:", k)
case _:
print("Unknown event")
Common Mistakes
Trying to use = (or ==) inside a pattern; literals are written directly:
case x = 10:   # ❌ invalid syntax
case 10:       # ✔ correct, matches the literal 10
Misunderstanding variable binding—patterns like case x assign to x, they don’t compare.
match value:
case x: # ALWAYS matches, captures value into x
print("Matched:", x)
To compare against a variable such as x, you cannot write a bare name (it would capture instead of compare). Use a guard, or a dotted name such as an enum or class attribute:
x = 10
match value:
    case v if v == x:   # the guard compares against x
        print("value is 10")
Summary Table
Feature | Description
Value patterns | Compare exact literals (200, "ok", etc.)
OR-patterns | "a" | "b" matches either
Sequence patterns | Match lists/tuples, use *rest
Mapping patterns | Match dictionaries by key
Class patterns | Match objects by attribute
Guards | case x if condition
Wildcard | _ matches anything
Variable capture | Patterns bind matched values to variables
Python Numbers
Introduction
Python provides several built-in numeric types to represent and manipulate numbers.
The three most common numeric types are:
int: integers of unlimited size
float: double-precision floating-point numbers
complex: numbers with real and imaginary parts
Python also provides many built-in arithmetic operators, numeric functions, and modules like math and decimal.
Integers (int)
Integers represent whole numbers and have arbitrary precision in Python.
a = 123
b = -42
c = 10_000_000 # underscores allowed for readability
print(a, b, c)
Python automatically grows integer size when needed (a demonstration follows the operator examples below). The standard arithmetic operators:
a = 10
b = 3
print(a + b) # addition
print(a - b) # subtraction
print(a * b) # multiplication
print(a / b) # float division
print(a // b) # integer division
print(a % b) # remainder
print(a ** b) # exponentiation
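For example, large values never overflow:
big = 2 ** 100
print(big)       # 1267650600228229401496703205376
print(big + 1)   # arithmetic stays exact, no overflow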
Python Protocols
A Protocol in Python (introduced in PEP 544) is a way to define structural typing (also called duck typing) for static type checkers such as mypy and pyright.
Structural typing means:
"If an object has the required methods/attributes, it is accepted — regardless of its class."
This is different from nominal typing where the type depends on inheritance.
Protocols allow you to define interfaces without forcing inheritance.
Basic Protocol Example
from typing import Protocol
class Greeter(Protocol):
def greet(self) -> str:
...
Any class with a greet() method returning a str matches this protocol:
class Person:
def greet(self) -> str:
return "Hello!"
class Robot:
def greet(self) -> str:
return "Beep bop"
Both classes satisfy Greeter.
def welcome(g: Greeter) -> None:
print(g.greet())
welcome(Person()) # OK
welcome(Robot()) # OK
No Need to Inherit
The following also works, because Protocols check structure, not inheritance:
class Cat:
def greet(self) -> str:
return "Meow"
welcome(Cat()) # Also OK
Cat does NOT inherit from Greeter, but matches the required structure.
Protocols with Attributes
class User(Protocol):
name: str
age: int
def print_user(u: User) -> None:
print(u.name, u.age)
Any object with name and age attributes matches:
class Person:
def __init__(self, name: str, age: int) -> None:
self.name = name
self.age = age
print_user(Person("Hwangfu", 23))
TypedDict
TypedDict (introduced in PEP 589) allows you to define dictionaries with a fixed structure — specific keys and specific value types.
It is extremely useful when you want dict-like objects but with the type safety of data classes.
Unlike dataclasses, TypedDict remains a regular dict at runtime and does not enforce types — checkers like mypy and pyright enforce it statically.
Basic Usage
from typing import TypedDict
class User(TypedDict):
id: int
name: str
email: str
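A minimal usage sketch (the literal values are only examples); remember that a TypedDict is still a plain dict at runtime:
user: User = {"id": 1, "name": "Anna", "email": "anna@example.com"}
print(user["name"])      # ordinary dict access
# user["id"] = "oops"    # a checker such as mypy would flag this; Python itself would not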
names = ["Anna", "Ben", "Carl"]
for i in range(len(names)):
print(i, names[i])
Reversed loops
for i in range(10, -1, -1):
print(i)
Generating arithmetic sequences
evens = list(range(0, 21, 2))
Comparison of List vs Range
Operation | list(range) | range
Memory usage | Large (stores all values) | Tiny (three integers only)
Speed of iteration | Fast | Fast
Supports slicing | Yes | Yes
Supports membership test | Linear search | Constant-time math
Summary
Feature | Description
Lazy | No list stored; values computed when needed
Efficient | Constant memory usage
Exclusive stop | Sequence stops before the stop value
Step | Can count upward or downward
Supports slicing | Slicing returns a new range
Constant-time membership | Efficient mathematical check
Python Lambda Expressions
Introduction
A lambda expression in Python creates a small, anonymous function.
Lambdas are useful when you need a short function for a brief moment and do not want to define a full def block.
However, they are intentionally limited: a lambda can contain only one expression (no statements).
Basic Syntax
lambda arguments: expression
Example:
add = lambda a, b: a + b
print(add(3, 4)) # 7
This is equivalent to:
def add(a, b):
return a + b
Single Expression Only
The body must be a single expression — not statements, loops, assignments, or annotations.
lambda x: x * 2 # OK
lambda x: print(x) # OK (print is an expression here)
lambda x: y = x + 1 # ❌ error (assignment is not allowed)
lambda x: for i in ... # ❌ cannot contain loops
Zero-Argument Lambdas
noop = lambda: None
noop()
Lambdas with Default Values
f = lambda x=10, y=20: x + y
print(f()) # 30
print(f(5)) # 25
Lambdas capture variables from the surrounding scope.
def make_add(n):
return lambda x: x + n
add5 = make_add(5)
print(add5(10)) # 15
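A few typical short-lived lambdas with sorted, map, and filter (the values are illustrative):
words = ["banana", "fig", "apple"]
print(sorted(words, key=lambda w: len(w)))                # ['fig', 'apple', 'banana']
print(list(map(lambda x: x * 2, [1, 2, 3])))              # [2, 4, 6]
print(list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4])))   # [2, 4]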
Lambdas vs Named Functions
Feature | lambda | def function
Has a name? | No | Yes
Multiple statements? | No | Yes
Main usage | Short, throwaway functions | Larger or reusable logic
Debug readability | Poor | Good
Syntax | Single expression | Block with return
Summary
Concept | Description
Anonymous | lambda defines short unnamed functions
Single expression | No blocks, no loops, no assignments
Common uses | sorted, map, filter, callbacks
Readability | Good for small tasks, bad for complex logic
Alternatives | Use def for named, clearer functions
Python Documentation Strings (Docstrings)
Introduction
A docstring in Python is a string literal placed at the beginning of a module, class, method, or function.
Docstrings provide built-in documentation and can be retrieved programmatically using help(), .__doc__, and tooling such as IDEs and Sphinx.
Docstrings are intended for humans, but are stored and accessible at runtime.
The standard format is a triple-quoted string:
"""This is a docstring."""
Where Docstrings Can Be Placed
Module-level
Class-level
Function / Method-level
"""
This module handles user authentication logic.
"""
def login(user, password):
"""Authenticate a user by password."""
...
Accessing Docstrings
print(login.__doc__)
help(login)
help() uses docstrings to display formatted documentation.
One-Line Docstrings
Used for simple, self-explanatory functions.
def add(a, b):
"""Return the sum of a and b."""
return a + b
Multi-Line Docstrings
Follow PEP 257 conventions:
First line: short summary
Blank line
Detailed explanation (optional)
def connect(url, timeout=10):
"""
Connect to a remote server.
This function establishes a TCP connection to the given URL.
The connection will fail if the timeout is exceeded.
"""
...
Docstrings for Classes
class User:
"""
Represents a system user with username and email.
"""
def __init__(self, username, email):
"""Initialize a new User object."""
self.username = username
self.email = email
Docstrings for Methods
class Circle:
"""Circle with radius and area calculation."""
def area(self):
"""Return the area of the circle."""
import math
return math.pi * self.r * self.r
Documenting Parameters
Python has no official format, but common standards are:
Google style
NumPy style
reStructuredText / Sphinx style
Google Style Example
def add(a, b):
"""
Add two numbers.
Args:
a (int): First number.
b (int): Second number.
Returns:
int: Sum of a and b.
"""
return a + b
NumPy Style
def scale(values, factor):
"""
Scale an array by a factor.
Parameters
----------
values : list[int]
Sequence of numbers.
factor : int or float
Multiplier.
Returns
-------
list[int]
Scaled numbers.
"""
return [v * factor for v in values]
def compute(x):
"""
Compute something useful.
Returns:
float: The computed value.
"""
return x * 2.5
Documenting Exceptions
def read_file(path):
"""
Read a file.
Args:
path (str): File path.
Raises:
FileNotFoundError: If the file does not exist.
"""
with open(path) as f:
return f.read()
Docstrings for Modules
"""
utility.py - helper functions for math and statistics.
"""
Docstrings for Packages
Place a docstring inside __init__.py:
"""
This package contains utilities for data analysis.
"""
Automated Tools That Use Docstrings
help() — built-in interactive documentation
IDEs (VSCode, PyCharm, etc.) show docstring hints
pydoc — documentation generator
Sphinx — advanced documentation tool
Type checkers use docstrings for descriptions but not for typing
PEP 257 Docstring Conventions
Triple quotes should be used.
First line should be a short summary.
Add a blank line before detailed descriptions.
Use consistent indentation.
Closing quotes should be on their own line if multi-line.
Summary
Aspect | Description
Definition | String literal used to document modules, classes, and functions
Supported By | help(), IDEs, Sphinx, pydoc
Styles | Google, NumPy, Sphinx/RST
Retrieval | object.__doc__ or help()
Purpose | Provide human-readable API documentation
Best Practices | Short summary + details, parameter docs, return/exception docs
Python Function Annotations
Introduction
Function annotations allow you to attach arbitrary metadata to function parameters and return values.
They were introduced in PEP 3107 and are most commonly used for type hints in modern Python.
Annotations do not enforce types at runtime. They are:
for humans
for tooling (mypy, pyright, IDEs)
accessible at runtime via .__annotations__
Basic Syntax
def add(a: int, b: int) -> int:
return a + b
The a: int and b: int parts are parameter annotations.
The -> int after the parentheses is the return annotation.
Accessing Annotations at Runtime
print(add.__annotations__)
{
'a': int,
'b': int,
'return': int
}
Annotations Are Not Enforced
Python does not check annotated types automatically.
add("hello", "world") # Works (but might not be meaningful)
Type checking is done by external tools such as mypy or pyright.
The del Statement
The del statement removes bindings between names and objects, or removes items from collections.
It does not directly delete objects from memory — Python deletes the object only when nothing references it anymore (via reference counting + garbage collection).
Deleting Variables (Name Bindings)
x = 10
del x
print(x) # NameError: name 'x' is not defined
del x removes the variable x from the current namespace.
person.pop("age") # returns the value
person.pop("missing", "?") # default if key missing
del person["name"] # raises KeyError if missing
person.clear() # remove all items
Checking Your pip Version
python -m pip --version
Shows which Python interpreter pip is attached to and its installation path.
Basic Installation Commands
# install latest version of a package
python -m pip install requests
# install a specific version
python -m pip install "requests==2.31.0"
# install at least version X
python -m pip install "requests>=2.31.0"
# install from a local file
python -m pip install ./my_package-0.1.0-py3-none-any.whl
Upgrading and Uninstalling Packages
# upgrade a package
python -m pip install --upgrade requests
# uninstall a package
python -m pip uninstall requests
--upgrade installs the newest available version.
Listing and Inspecting Installed Packages
# list all installed packages
python -m pip list
# show detailed information about a package
python -m pip show requests
pip show displays version, location, dependencies, etc.
Using Requirements Files
A requirements.txt file lists all dependencies for a project.
Typical format:
requests==2.31.0
flask>=2.3
numpy~=1.26
Install everything from a requirements file:
python -m pip install -r requirements.txt
Freeze current environment into a requirements file:
python -m pip freeze > requirements.txt
pip and Virtual Environments
Always prefer using pip inside a virtual environment to avoid polluting system Python.
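A typical workflow might look like this (Unix-style activation shown; on Windows the activation script lives under .venv\Scripts):
# create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate
# pip now installs only into .venv
python -m pip install requests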
Reading and Writing JSON Files
import json
with open("config.json") as f:
data = json.load(f)
with open("output.json", "w") as f:
json.dump(data, f, indent=2)
Reading and Writing CSV Files
import csv
with open("data.csv") as f:
reader = csv.reader(f)
for row in reader:
print(row)
import csv
rows = [["id", "name"], [1, "Ammy"]]
with open("out.csv", "w", newline="") as f:
writer = csv.writer(f)
writer.writerows(rows)
Summary
Task | Method
Open a file | open(path, mode)
Safe open | with open(...)
Read all | f.read()
Read line-by-line | Iterate the file object
Write to file | f.write()
Binary mode | "rb", "wb"
Modern paths | pathlib.Path
Error handling | Use try/except
Errors and Exceptions in Python
Introduction
Errors happen when something goes wrong during program execution.
Python uses a system called exceptions to report and handle these problems.
If an exception is not handled, the program will stop and display a traceback.
Two Main Types of Errors
Syntax Errors: mistakes in the code structure
Exceptions: errors that occur during execution
# Syntax error example
if True
print("hello")
# Runtime exception example
x = 10 / 0
Common Built-in Exceptions
Exception | When It Happens
SyntaxError | Invalid Python code
NameError | Variable or name not found
TypeError | Wrong type used in an operation
ValueError | Correct type, but invalid value
ZeroDivisionError | Division by zero
IndexError | Out-of-bounds list or tuple index
KeyError | Missing key in dictionary
FileNotFoundError | File does not exist
PermissionError | No permission to access resource
ImportError | Module cannot be imported
AssertionError | Assertion fails
Handling Exceptions with try/except
try:
x = 10 / 0
except ZeroDivisionError:
print("Cannot divide by zero!")
The program continues running normally after the except block.
Handling Multiple Different Exceptions
try:
n = int(input("Enter a number: "))
result = 10 / n
except ValueError:
print("You did not enter a valid integer.")
except ZeroDivisionError:
print("Cannot divide by zero.")
Catching Multiple Exceptions in One Block
try:
x = int("abc")
except (ValueError, TypeError):
print("Something is wrong with the input.")
Using else in Exception Handling
The else block runs only if no exception occurs.
try:
f = open("data.txt")
except FileNotFoundError:
print("File not found.")
else:
print("File opened successfully!")
f.close()
The finally Block
Always executes, no matter what happens.
Often used for cleanup: closing files, releasing resources, unlocking locks.
f = None
try:
    f = open("data.txt")
    data = f.read()
except FileNotFoundError:
    print("Missing file.")
finally:
    print("Closing file.")
    if f is not None:   # f may never have been opened
        f.close()
Raising Exceptions Yourself
raise ValueError("Invalid data format")
You can raise exceptions to signal errors in your program.
Creating Custom Exceptions
class NegativeNumberError(Exception):
pass
def check(n):
if n < 0:
raise NegativeNumberError("No negative numbers allowed!")
check(-5)
Custom exceptions inherit from Exception.
Useful in large programs, libraries, or frameworks.
Accessing Exception Details
try:
1 / 0
except Exception as e:
print(type(e))
print(e)
e contains the error message and type.
Ignoring Exceptions (Not Recommended)
try:
risky_action()
except:
pass
This silently hides errors → debugging becomes difficult.
For a bigger project or library, define a base exception and derive others from it:
class AppError(Exception):
"""Base class for all application-specific errors."""
pass
class DatabaseError(AppError):
pass
class NotFoundError(AppError):
pass
class PermissionDeniedError(AppError):
pass
This allows:
catching a specific error → DatabaseError
catching all app errors → AppError
try:
...
except NotFoundError:
print("Item not found, show 404")
except AppError:
print("Some other app-level error")
Choosing Good Exception Names
Use clear, descriptive names ending with Error:
InvalidStateError
AuthenticationError
RateLimitError
TimeoutError (already exists in stdlib)
Make it obvious what went wrong just from the class name.
Wrapping Lower-Level Exceptions
Often you want to convert low-level exceptions (e.g. OSError) into your own domain exceptions.
class StorageError(Exception):
pass
def read_user_file(path: str) -> str:
try:
with open(path) as f:
return f.read()
except OSError as e:
# wrap OS error in a domain-specific one
raise StorageError(f"Could not read {path}") from e
Using raise ... from e:
preserves the original traceback as the cause
shows both high-level and low-level errors in logs
Custom Exceptions and __str__ / __repr__
You can override __str__ to control how the error is displayed:
class ApiError(Exception):
def __init__(self, status: int, message: str):
self.status = status
self.message = message
super().__init__(message)
def __str__(self) -> str:
return f"API error {self.status}: {self.message}"
raise ApiError(404, "Resource not found")
Now print(e) shows a friendly message.
Custom Exceptions with Extra Context
You can attach more structured information, not only text:
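A minimal sketch; the exception name and its fields (field, value) are illustrative:
class ValidationError(Exception):
    def __init__(self, field: str, value, message: str):
        self.field = field     # which field failed
        self.value = value     # the offending value
        super().__init__(message)

try:
    raise ValidationError("age", -3, "age must be non-negative")
except ValidationError as e:
    print(e.field, e.value, e)   # age -3 age must be non-negative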
This is useful when your caller needs to inspect error details programmatically (e.g., web API validation).
Where to Put Custom Exceptions in a Project
In small scripts: define them near where they are used.
In larger projects or libraries:
put them in a dedicated module like errors.py or exceptions.py
myapp/
__init__.py
exceptions.py # all custom exception classes
models.py
services.py
# exceptions.py
class MyAppError(Exception):
pass
class AuthError(MyAppError):
pass
# services.py
from .exceptions import AuthError
def login(user, password):
if not valid(user, password):
raise AuthError("Invalid credentials")
Best Practices
Always inherit from Exception (or a subclass), not from BaseException.
Use a base exception for your package or app:
MyLibError, MyAppError, etc.
Do not overuse custom exceptions:
Sometimes ValueError or TypeError is enough.
Create a custom class when:
you need to distinguish it from standard errors
it encodes a specific domain concept
Use raise ... from e when wrapping other exceptions.
Give meaningful messages and store raw data as attributes.
Summary
Aspect | Recommendation
Base class | Inherit from Exception (or a library-specific base error)
Naming | End with Error, be descriptive (e.g. ConfigError)
Hierarchy | Create a root AppError / MyLibError and derive others
Extra data | Store extra fields in attributes; pass a friendly message to super().__init__
Wrapping | Use raise NewError(...) from e to keep original cause
Location | Large projects: put them in exceptions.py / errors.py
Python Classes
Introduction
Classes are the foundation of object-oriented programming (OOP) in Python.
A class defines:
what data an object has (attributes)
what an object can do (methods)
A class is like a blueprint; an object (instance) is a specific realization of that blueprint.
Creating a Class
class Person:
pass
This defines a class called Person but with no attributes or methods yet.
Instantiating (Creating Objects)
p = Person()
print(p)
Calling the class creates a new instance.
The __init__ Method (Constructor)
The constructor initializes new objects.
class Person:
def __init__(self, name, age):
self.name = name # instance attribute
self.age = age
p = Person("Alice", 25)
print(p.name)
print(p.age)
self refers to the instance being created.
Attributes like self.name belong to each object separately.
Instance Attributes
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
p1 = Point(1, 2)
p2 = Point(10, 20)
print(p1.x, p2.x)
Each object has its own independent attributes.
Instance Methods
Methods are functions that belong to a class.
They always take self as the first argument.
class Circle:
def __init__(self, radius):
self.radius = radius
def area(self):
return 3.14 * self.radius * self.radius
c = Circle(5)
print(c.area())
Class Attributes
Class attributes are shared by all instances.
class Dog:
species = "Canis lupus familiaris" # class attribute
def __init__(self, name):
self.name = name # instance attribute
d1 = Dog("Buddy")
d2 = Dog("Charlie")
print(d1.species, d2.species, Dog.species)
Instance attributes override class attributes with the same name.
Methods vs Attributes (Key Concept)
Type | Stored Where? | Shared?
Instance attributes | Inside each object | No
Class attributes | On the class itself | Yes
Methods | On the class (as functions) | Shared by all instances
Special Methods (Dunder Methods)
Python classes can override special methods to control built-in behavior.
Some common examples:
class Person:
def __init__(self, name):
self.name = name
def __str__(self):
return f"Person({self.name})"
def __repr__(self):
return f"Person(name={self.name!r})"
p = Person("Alice")
print(p) # calls __str__
print([p]) # calls __repr__
Other useful dunders (a small example follows this list):
__len__: len() behavior
__eq__: == operator
__lt__, __gt__: ordering
__add__, __mul__: operator overloading
__enter__, __exit__: context managers
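A small sketch showing __len__ and __eq__ in action (the Basket class is invented for illustration):
class Basket:
    def __init__(self, items):
        self.items = items
    def __len__(self):
        return len(self.items)              # enables len(basket)
    def __eq__(self, other):
        if not isinstance(other, Basket):
            return NotImplemented
        return self.items == other.items    # enables basket1 == basket2

b1 = Basket(["apple", "pear"])
b2 = Basket(["apple", "pear"])
print(len(b1))    # 2
print(b1 == b2)   # True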
Class Methods
Defined with @classmethod.
Receive the class (cls) instead of an object (self).
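A common use is an alternative constructor; this sketch uses an invented Date class:
class Date:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, text):
        # cls is the class itself, so subclasses work too
        y, m, d = (int(part) for part in text.split("-"))
        return cls(y, m, d)

d = Date.from_string("2024-05-17")
print(d.year, d.month, d.day)   # 2024 5 17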
Multiple Inheritance
Python allows a class to inherit from multiple classes.
This is useful, but it must be used carefully.
class Flyer:
def action(self):
return "flying"
class Swimmer:
def action(self):
return "swimming"
class Duck(Flyer, Swimmer):
pass
d = Duck()
print(d.action()) # "flying" (Flyer is first)
Python resolves method calls using MRO (Method Resolution Order).
Here, Flyer is checked before Swimmer.
Method Resolution Order (MRO)
Determines which class Python looks in first when finding an attribute or method.
Use .mro() to inspect it:
print(Duck.mro())
MRO uses the C3 linearization algorithm ensuring:
consistent order
left-to-right priority
parents before grandparents
Using super() with Multiple Inheritance
Because of MRO, super() works across multiple inheritance trees.
class A:
def go(self):
print("A.go")
class B(A):
def go(self):
super().go()
print("B.go")
class C(A):
def go(self):
super().go()
print("C.go")
class D(B, C):
pass
D().go()
Output:
A.go
C.go
B.go
Order follows D.mro().
Abstract Base Classes (ABC)
Used to force child classes to implement specific methods.
from abc import ABC, abstractmethod
class Shape(ABC):
@abstractmethod
def area(self):
pass
class Square(Shape):
def __init__(self, side):
self.side = side
def area(self):
return self.side * self.side
You cannot instantiate Shape directly.
Preventing Inheritance
Python has no direct final keyword like Java.
But you can use conventions, metaclasses, or raise errors inside __init_subclass__.
class FinalClass:
def __init_subclass__(cls):
raise TypeError("This class cannot be inherited from")
Common Inheritance Patterns
"Is-a" relationship (proper use):
Dog is an Animal
Car is a Vehicle
"Has-a" relationship → use composition instead of inheritance:
Car has an Engine
Summary
Concept | Description
Inheritance | Child class extends/overrides parent class
super() | Call parent method following MRO
Multiple inheritance | Class inherits from multiple parents
MRO | Defines attribute/method lookup order
Overriding | Child redefines parent method
ABC | Define abstract methods to enforce implementation
Best practice | Use inheritance for “is-a” relationships; use composition for “has-a”
Private Variables in Python
Introduction
Python does not have true private variables like Java or C++.
However, Python uses conventions and name-mangling to protect internal attributes.
Three common visibility conventions:
public: var
protected (by convention): _var
private (name-mangled): __var
Public Attributes
Accessible from anywhere.
Default visibility in Python.
class Person:
name = "Alice"
p = Person()
print(p.name) # accessible
Protected Attributes (Single Underscore)
Convention: _var means "for internal use only".
Not enforced by Python. Developers are expected not to access it directly.
class User:
def __init__(self):
self._password = "1234" # internal
u = User()
print(u._password) # works, but not recommended
Used mainly to indicate intent, not to block access.
Private Attributes (Double Underscore)
Using __var triggers name mangling.
Python changes the internal name to prevent accidental overrides from subclasses.
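A minimal sketch of the mangling (the Account class is illustrative):
class Account:
    def __init__(self):
        self.__balance = 100   # stored as _Account__balance

a = Account()
# print(a.__balance)          # AttributeError
print(a._Account__balance)    # 100, the mangled name is still reachable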
The itertools Module
import itertools
for x in itertools.count(5, 2):
print(x)
if x > 12:
break
Common functions:
count(): infinite counting
cycle(): repeat items endlessly
repeat(): repeat a value
chain(): combine iterables
islice(): slice an iterator
Iterator Exhaustion
Once an iterator is exhausted, calling next() raises StopIteration.
Must recreate a new iterator to iterate again.
lst = [1, 2, 3]
it = iter(lst)
for i in it:
print(i)
for i in it:
print(i) # nothing printed
Reiterable Objects
Lists, strings, tuples return a new iterator each time:
nums = [1, 2, 3]
for x in nums:
print(x)
for x in nums:
print(x) # works again
This makes them "reiterable".
Building Your Own Iterable + Iterator Pair
class Reverse:
def __init__(self, data):
self.data = data
def __iter__(self):
return ReverseIterator(self.data)
class ReverseIterator:
def __init__(self, data):
self.data = data
self.index = len(data)
def __next__(self):
if self.index == 0:
raise StopIteration
self.index -= 1
return self.data[self.index]
words = Reverse("abc")
for w in words:
print(w)
Summary
Concept | Description
Iterator | Object with __iter__ and __next__
Iterable | Object that produces an iterator via iter()
StopIteration | Signals end of iteration
Generators | Simplest way to create iterators
Iterator exhaustion | Iterators cannot be reused; create another
Itertools | Advanced iterator utilities
Python Generators
Introduction
A generator is a special kind of iterator that produces values lazily — one at a time — using the yield keyword.
Generators allow you to write iterable sequences without creating full containers in memory.
Generators are ideal for:
large data processing
streaming / pipelines
infinite sequences
efficient iteration
Generators automatically implement the iterator protocol:
__iter__()
__next__()
Basic Generator Function
A generator function contains one or more yield statements.
def countdown(n):
while n > 0:
yield n
n -= 1
for x in countdown(3):
print(x)
Output:
3
2
1
Each call to yield produces the next value.
Execution pauses at yield and resumes on the next next() call.
Generator Execution Model
Generators maintain their internal state between yields.
Unlike normal functions, they do not restart from the top each time.
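A small demonstration of pausing and resuming with next():
def steps():
    print("start")
    yield 1
    print("resumed once")
    yield 2

g = steps()
print(next(g))   # prints "start", then 1
print(next(g))   # prints "resumed once", then 2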
Generators are great for streaming transformations.
def read_numbers():
for i in range(10):
yield i
def even_numbers(numbers):
for n in numbers:
if n % 2 == 0:
yield n
def squared(numbers):
for n in numbers:
yield n * n
pipeline = squared(even_numbers(read_numbers()))
for x in pipeline:
print(x)
This pipeline:
reads numbers
filters even ones
squares the result
all lazily
Sending Data into a Generator
Generators are not only one-way producers of values.
Using generator.send(value), you can send data into a paused generator.
This turns generators into coroutines — a powerful technique for cooperative multitasking, data pipelines, and stateful processing.
The first next() call starts the generator until the first yield.
The next send() call provides a value that becomes the result of the suspended yield expression.
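The examples below assume a small greeter generator along these lines (its definition is not shown above, so this is a reconstruction):
def greeter():
    name = yield "Your name?"    # pauses here; send() supplies the value
    yield f"Hello, {name}!"

g = greeter()
print(next(g))           # "Your name?"  (runs until the first yield)
print(g.send("Alice"))   # "Hello, Alice!"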
How send() Works Internally
When a generator is paused at this line:
name = yield "Your name?"
The expression yield "Your name?" acts like:
output: "Your name?"
input: value provided later via send()
When we call g.send("Alice"):
the yield returns the value "Alice"
so name becomes "Alice"
execution continues until the next yield
The First send() Must Be None
You cannot send a non-None value into a generator before it reaches the first yield.
Therefore, the first interaction usually uses next() (or send(None)).
g = greeter()
g.send("Alice") # ERROR! Generator not started yet
Correct way:
g = greeter()
next(g) # or g.send(None)
g.send("Alice")
Using Generators as Simple Coroutines
A generator can behave like a coroutine that receives many values over time.
def accumulator():
total = 0
while True:
value = yield total
total += value
acc = accumulator()
print(next(acc)) # 0
print(acc.send(5)) # 5
print(acc.send(10)) # 15
print(acc.send(3)) # 18
This generator:
starts with a total of 0
each send() adds the value to the total
yields the updated total each time
This is a miniature coroutine.
Stopping a Coroutine with a Sentinel Value
Generators can be designed to terminate on a special input, such as None.
def summer():
total = 0
while True:
item = yield total
if item is None: # sentinel value
return total
total += item
g = summer()
next(g)
print(g.send(5)) # 5
print(g.send(7)) # 12
try:
g.send(None) # stops the generator
except StopIteration as e:
print("Final total:", e.value)
When a generator returns, the return value is contained inside a StopIteration exception.
This is how Python’s async framework originally worked before async/await.
Sending Exceptions into a Generator
You can use throw() to inject an exception inside a paused generator.
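A minimal sketch of throw() (the worker generator is invented for illustration):
def worker():
    try:
        while True:
            item = yield
            print("Processing", item)
    except RuntimeError as e:
        print("Stopped because:", e)

w = worker()
next(w)              # advance to the first yield
w.send("job-1")      # Processing job-1
try:
    w.throw(RuntimeError("shutdown"))   # raised at the paused yield
except StopIteration:
    pass             # the generator finished after handling the error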
A generator may use return, but this signals the end of iteration.
The returned value is stored in the StopIteration exception.
def total():
s = 0
while True:
n = yield
if n is None:
break
s += n
return s
Infinite Generators
Generators can represent unbounded sequences.
def naturals():
n = 0
while True:
yield n
n += 1
Generators vs Regular Functions
Regular Function | Generator
Returns once | Yields many times
Does not preserve state | Resumes from last yield
Eager evaluation | Lazy evaluation
Returns ordinary values | Returns an iterator
Summary
Feature | Description
yield | Produces values lazily
Stateful execution | Generator pauses and resumes
Memory efficient | Only one value stored at a time
Generator expressions | Lightweight inline generators
send() / throw() | Two-way communication with generator
Pipelines | Ideal for streaming data processing
Brief Tour of the Python Standard Library
Introduction
The Python Standard Library is a powerful collection of modules that come bundled with Python, no installation required.
It includes tools for:
system interaction
file handling
data formats
networking
math and statistics
dates and times
compression
debugging and testing
This chapter provides a practical overview of the most important modules you’ll frequently use.
Operating System Interfaces: os and sys
os lets you interact with the underlying operating system.
sys provides access to interpreter-level features.
import os, sys
print(os.getcwd()) # current working directory
print(os.listdir(".")) # list files
print(sys.version) # python version
print(sys.argv) # command-line arguments
File and Path Handling: pathlib
The modern and recommended way to work with filesystem paths.
from pathlib import Path
p = Path("notes.txt")
print(p.exists())
print(p.read_text())
Mathematics: math, statistics, random
The math module provides fast mathematical functions.
statistics handles averages, medians, variance, etc.
random produces random numbers.
import math, statistics, random
print(math.sqrt(9))
print(statistics.mean([1,2,3,4]))
print(random.randint(1, 10))
Dates and Times: datetime
from datetime import datetime, timedelta
now = datetime.now()
print(now)
print(now + timedelta(days=3))
Collections: collections
Provides powerful data structures beyond built-in lists/dicts.
from collections import Counter, defaultdict, deque
c = Counter("banana")
print(c) # character frequencies
d = defaultdict(int)
d["a"] += 1 # no KeyError
queue = deque([1,2,3])
queue.appendleft(0)
print(queue)
Working with JSON: json
import json
data = {"name": "Alice", "age": 25}
text = json.dumps(data)
print(text)
loaded = json.loads(text)
print(loaded)
Regular Expressions: re
import re
pattern = r"\d+"
print(re.findall(pattern, "The year is 2025"))
Working with the Internet: urllib.request, http.client
from urllib import request
with request.urlopen("https://example.com") as f:
html = f.read()
print(len(html))
import functools

def cache(func):
stored = {}
@functools.wraps(func)
def inner(x):
if x not in stored:
stored[x] = func(x)
return stored[x]
return inner
@cache
def square(n):
print("Computing...")
return n * n
print(square(5)) # computed
print(square(5)) # from cache
Wrappers vs Decorators
A decorator is simply a shortcut for applying a wrapper.
Without decorator:
wrapped = wrapper(func)
With decorator:
@wrapper
def func(): ...
Thus: All decorators use wrappers, but not all wrappers need decorators.
Using Wrappers With Methods (Classes)
Wrappers also work on methods, but remember: the first argument is self.
import functools
import time
def timer(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
start = time.perf_counter()
value = func(*args, **kwargs)
end = time.perf_counter()
print(f"{func.__name__} took {end-start:.5f}s")
return value
return wrapper
3. Memoization / Caching
def cache(func):
stored = {}
@functools.wraps(func)
def wrapper(x):
if x not in stored:
stored[x] = func(x)
return stored[x]
return wrapper
4. Authorization (permissions)
def require_admin(func):
@functools.wraps(func)
def wrapper(user, *args, **kwargs):
if user != "admin":
raise PermissionError("Not allowed")
return func(user, *args, **kwargs)
return wrapper
5. Retry Decorator
import functools
import time
def retry(n):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
for i in range(n):
try:
return func(*args, **kwargs)
except Exception as e:
print(f"Retry {i+1}/{n}: {e}")
time.sleep(1)
raise RuntimeError("Failed after retries")
return wrapper
return decorator
Decorators for Asynchronous Functions
def async_decorator(func):
@functools.wraps(func)
async def wrapper(*args, **kwargs):
print("Before await")
result = await func(*args, **kwargs)
print("After await")
return result
return wrapper
You can decorate entire classes (not just methods).
# assumes a `debug` decorator (one that wraps a function to log its calls) is defined earlier
def make_all_methods_debugged(cls):
    for name, value in list(cls.__dict__.items()):
        if callable(value):
            setattr(cls, name, debug(value))
    return cls
@make_all_methods_debugged
class A:
def hello(self): print("Hello")
def world(self): print("World")
Common Built-In Decorators
Decorator | Purpose
@staticmethod | Method that has no self
@classmethod | Method that receives cls instead of self
@property | Turn methods into computed attributes
@functools.lru_cache | Built-in caching decorator
Summary
Concept | Meaning
Decorator | Function that wraps another function to add behavior
Wrapper | Inner function that surrounds the original function
Inside main() the event loop is already running, so await is legal.
Key Difference: Where You Can Use Them
asyncio.run() can be called only from normal synchronous code (where no event loop is currently running):
# OK – top-level script
if __name__ == "__main__":
asyncio.run(main())
await can be used only:
inside async def
when an event loop is already running
# This is NOT allowed:
result = await compute() # SyntaxError outside async def
In plain Python scripts, you never write await directly at top-level.
Example: Using Both Together Correctly
import asyncio
async def fetch_data():
print("Fetching...")
await asyncio.sleep(1)
print("Done!")
return {"data": 123}
async def main():
# We are already inside async world
result = await fetch_data()
print("Result from fetch:", result)
# Bridge from sync world to async world
if __name__ == "__main__":
asyncio.run(main())
Flow:
Script starts in sync world.
asyncio.run(main()) enters async world.
await fetch_data() stays inside async world, waiting for another coroutine.
When done, asyncio.run exits async world and returns control.
You Cannot Use asyncio.run() Inside a Running Event Loop
If you try to call asyncio.run() from inside an async function (where an event loop is already running), you get an error:
async def inner():
# WRONG: this will raise
asyncio.run(other_coroutine()) # RuntimeError: asyncio.run() cannot be called from a running event loop
Inside async code, you must use await, not asyncio.run():
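A minimal sketch of the correct pattern, with an illustrative other_coroutine:
import asyncio

async def other_coroutine():
    await asyncio.sleep(0.1)
    return 42

async def outer():
    # correct: await the coroutine; the already-running loop drives it
    return await other_coroutine()

print(asyncio.run(outer()))   # 42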
Calling C with ctypes
import ctypes
from ctypes import c_int
# Load the shared library
lib = ctypes.CDLL("./libmymath.so")
# Declare argument and return types
lib.add.argtypes = (c_int, c_int)
lib.add.restype = c_int
result = lib.add(2, 5)
print(result) # 7
Key points:
ctypes.CDLL loads the shared library.
argtypes and restype define how Python should convert arguments and results.
Works on Windows (via .dll) and macOS too (with .dylib / .so).
Passing Pointers and Arrays with ctypes
You can also pass arrays, structs, and pointers.
/* sum_array.c */
int sum_array(const int* data, int n) {
int s = 0;
for (int i = 0; i < n; ++i) {
s += data[i];
}
return s;
}
Compile to shared library, then:
import ctypes
from ctypes import c_int
lib = ctypes.CDLL("./libsumarray.so")
lib.sum_array.argtypes = (ctypes.POINTER(c_int), c_int)
lib.sum_array.restype = c_int
# Create C array from Python list
arr = (c_int * 4)(1, 2, 3, 4)
result = lib.sum_array(arr, 4)
print(result) # 10
Here, Python passes the pointer to the array into the C function.
Calling C with cffi (C Foreign Function Interface)
cffi is a 3rd-party library that aims to be:
more C-like
cleaner than raw ctypes
Install: python3 -m pip install cffi
Example (using the same int add(int, int)):
from cffi import FFI
ffi = FFI()
ffi.cdef("int add(int a, int b);")
lib = ffi.dlopen("./libmymath.so")
result = lib.add(2, 3)
print(result) # 5
Key benefits:
You describe C functions with C-like declarations (cdef).
cffi handles conversions.
Using Cython as a “Friendly Frontend” to C
Cython lets you write Python-like syntax, then compiles it to C.
Good for:
gradually accelerating Python code
calling C functions easily
Example: calling a C function from Cython.
/* cmathlib.h */
int add(int a, int b);
/* cmathlib.c */
int add(int a, int b) {
return a + b;
}
Cython file mymodule.pyx:
# mymodule.pyx
cdef extern from "cmathlib.h":
int add(int a, int b)
def py_add(a: int, b: int) -> int:
return add(a, b)
Then build with a Cython-aware setup.py, and use:
import mymodule
print(mymodule.py_add(3, 4)) # 7
Which Method Should I Use?
Scenario | Recommended Method | Reason
You already have a compiled C library (.so / .dll) | ctypes or cffi | No need to touch the C code; write bindings in Python.
You want deep integration with Python objects | C extension (Python C API) | Maximum control, can define new Python types.
You want speed but prefer Python-like syntax | Cython | Gradual optimization, simpler than raw C API.
You want fine control of ABI/API with clean C syntax | cffi | C-like declarations with a nice interface.
Python Threading
What Is Threading in Python?
Threading is a way to run multiple operations concurrently within the same process.
A thread is a lightweight execution unit that shares:
memory
variables
resources
with other threads of the same process.
Important: In CPython, the Global Interpreter Lock (GIL) prevents true parallel execution of Python bytecode.
However, threads still run concurrently during I/O.
Importing the Threading Module
import threading
Creating and Starting a Thread
Threads can run any function.
import threading
import time
def task():
print("Task started")
time.sleep(2)
print("Task finished")
t = threading.Thread(target=task)
t.start() # start the thread
t.join() # wait for the thread to finish
Parameter meanings:
target → the function to execute in the thread
start() → actually runs the thread
join() → blocks until the thread finishes
Passing Arguments to a Thread
def greet(name):
print(f"Hello, {name}")
t = threading.Thread(target=greet, args=("Junzhe",))
t.start()
args=() must be a tuple.
You can also use kwargs={} for named arguments.
Creating a Thread by Subclassing Thread
class Worker(threading.Thread):
def run(self):
print("Worker running")
w = Worker()
w.start()
Recommended only for more complex behaviors.
Daemon Threads
A daemon thread runs in the background and does NOT block program exit.
def background():
while True:
print("Running...")
time.sleep(1)
t = threading.Thread(target=background, daemon=True)
t.start()
When the main thread exits, daemon threads are killed immediately.
Good for:
logging threads
background monitoring
cleanup tasks
The Global Interpreter Lock (GIL)
CPython has a global lock that ensures only one thread executes Python bytecode at a time.
Implications:
CPU-bound tasks → NOT faster with threads
I/O-bound tasks → benefit greatly
Examples of I/O-bound tasks that benefit:
HTTP requests
database queries
reading/writing files
sleeping
For CPU-bound tasks, use:
multiprocessing (a minimal sketch follows this list)
or C extensions
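A minimal multiprocessing sketch for CPU-bound work (the function and pool size are illustrative):
from multiprocessing import Pool

def cube(n):
    return n ** 3

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # each worker runs in its own process, so the GIL is not a bottleneck
        print(pool.map(cube, range(10)))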
Thread Synchronization with Lock
A Lock prevents race conditions when multiple threads modify shared data.
lock = threading.Lock()
counter = 0
def increment():
global counter
for _ in range(100000):
with lock: # lock acquired
counter += 1 # safe update
threads = [threading.Thread(target=increment) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)
Without locks, the result would be incorrect.
Other Synchronization Primitives
Python threading provides several powerful tools:
Primitive | Description | Use Case
Lock | Mutual exclusion | Avoid race conditions
RLock | Reentrant lock (same thread may lock multiple times) | Recursive functions
Semaphore | Limit number of concurrent threads | Connection pools
Event | Send signals between threads | Thread coordination
Condition | Complex wait/notify patterns | Producer–consumer
Barrier | Block threads until all have reached the same point | Parallel stages
Queue: The Best Way to Pass Data Between Threads
queue.Queue is thread-safe and used heavily in multi-threaded programs.
from queue import Queue
import threading
q = Queue()
def producer():
for i in range(5):
q.put(i)
def consumer():
while True:
item = q.get()
print("Consumed", item)
q.task_done()
threading.Thread(target=producer).start()
threading.Thread(target=consumer, daemon=True).start()
q.join()
Queues eliminate the need for manual locks.
Comparison: Threading vs Multiprocessing vs Asyncio