How to Land Your First Python Developer Job: Skills, Projects, and Where to Look
Getting your first Python developer role can feel like a mountain climb — many junior developers wonder: "Do I have enough experience? What should I build? Where even are these jobs?" The good news: with the right approach, you can break in. This guide walks you through the essential skills, project ideas, and job-search strategies to help you get your foot in the door.
1. Cultivate the core skills employers expect
When hiring for early-level Python roles, most employers look less at "years of experience" and more at foundational competence. As one hiring manager put it, "I don’t care about the formality of your education — I look for portfolio and skills, not degrees." (Boot.dev Blog)
Here are the core areas to focus on:
| Skill area | Why it matters | What to learn / practice |
| --- | --- | --- |
| Python fundamentals & syntax | You need to write bug-free code; mastery of control flow, data types, exceptions, etc. | Work through exercises, coding challenge sites, small scripts |
| Data structures & algorithms basics | Many interviews include algorithmic questions | Linked lists, stacks, sorting, search, recursion |
| Object-oriented programming & modules | Real software is rarely flat scripts; modules, classes, and reuse are key | Build small class hierarchies, break code into modules |
| Working with libraries & frameworks | Employers expect you to "know the ecosystem," not reinvent wheels | For web dev: Flask/Django; for data roles: NumPy, Pandas, etc. |
| Databases & SQL / NoSQL | Most apps store data; being comfortable reading and writing to a DB is essential | Practice CRUD (Create, Read, Update, Delete) operations |
| Version control (Git) | Almost every development team uses Git | Use Git for your projects, host code on GitHub / GitLab |
| Testing & debugging skills | Clean, reliable code is preferred; bug-hunting is part of real work | Use pytest / unittest, practice writing tests, use debugging tools |

This aligns with consensus advice from career guides: you don’t necessarily need a formal degree — you need demonstrable capability and real work you can show.
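To make the testing row above concrete, here is a minimal pytest-style sketch; the function and file names are illustrative, not from any particular project:

```python
# calculator.py -- a tiny function worth testing
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

# test_calculator.py -- pytest collects functions named test_*
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
```

Running `pytest` in the project directory discovers and runs `test_add` automatically; even a handful of tests like this in a portfolio project signals a professional habit.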
2. Build a portfolio that speaks louder than a resume
A solid portfolio helps employers see you’ve done real things. Focus on quality over quantity.
Project ideas to include:
- Web app (Flask or Django): A small CRUD app (to-do list, blog, user registration) shows you can build and integrate components.
- Data analysis / visualization: Scrape or gather data and present insights (cleaning, analysis, charts).
- API client / microservice: Build a small REST API or consume an external API (e.g. fetching data, caching, handling errors).
- Open-source contribution or plugin: A small bug fix, documentation addition, or helpful plugin to an existing project.
- Automation scripts / tools: E.g. parsing files, automating tasks, small bots — these show you can use Python in real life.
When you host projects, include a clear README, instructions to run them, and highlight the parts you’re proud of. This makes it easy for recruiters or hiring managers to evaluate your work.
As Coding Temple puts it: "focus on building a strong portfolio that showcases your projects ... select two or three that are high quality."
Additionally, documenting your learning process (via a blog or GitHub annotations) helps — it shows you’re reflective, serious, and growing.
3. Sharpen your interviewing & soft skills
Even with good code, how you present yourself matters.
- Practice algorithm / whiteboard-style questions. Use sites like LeetCode, HackerRank — even at junior level you’ll see simple problems.
- Behavioral interview prep. Be ready to talk about tradeoffs, challenges, failures, how you fixed bugs.
- Code review mindset. Be ready to receive feedback, show humility, and learn.
- Communication & clarity. Often, interviewers test how well you explain your code or logic.
4. Where to Find Python Jobs (and Why Some Are Better)
Once your skills and portfolio are in place, you need to find good, relevant opportunities. Below is a curated list of platforms and strategies — plus why niche boards often outperform general ones.
4.1 General job boards & aggregators
- Indeed — large and broad; good for volume, less for specificity.
- LinkedIn Jobs — good visibility and network overlap.
- Dice — tech-focused board that often lists developer roles.
- Google for Jobs / aggregators — meta platforms that pull listings from many sources.
These platforms are good for seeing volume and trends, but you’ll often compete with large applicant pools.
4.2 Python / tech-specialized job boards
Using more focused job boards helps you reach employers who are specifically hiring Python talent.
- Python.org Jobs — the official board maintained by the Python community.
- PyJobs — a job board focused exclusively on Python roles.
- Python Jobs HQ — curated Python job listings.
- PyCoder’s Jobs, Django Jobs, Remote Python — other niche boards listed among Python-specific resources.
These are gold mines, especially for roles that are less advertised on general boards.
4.3 Niche / regional / vertical job boards & platforms
Depending on your location or domain, niche outlets can have less competition and more relevance.
- Authentic Jobs — popular in tech circles for dev/design roles.
- Remote-only platforms — e.g. Remote Python, or boards that specifically filter remote / hybrid roles.
- Local/regional boards — always check your city or country; sometimes great roles never leave local channels.
5. Why Talyti Makes Sense as a Go-to Resource
When you’re actively job-hunting, having a hub of relevant, up-to-date listings can save time and surface better fits. That’s where Talyti comes in — it aggregates curated job postings (including Python and tech roles) and provides filters to help you find opportunities aligned with your level and domain interests.
Putting Talyti in your toolbelt means you don’t always have to wade through unrelated noise — you can go straight to what’s most relevant. Many Python job seekers use boards like Talyti in addition to the niche boards above to broaden their reach while staying focused.
6. Applying smartly (not wildly)
- Tailor your applications. Don’t just copy-paste; tweak your cover letter and resume to emphasize skills that match the job description.
- Include your portfolio links up front. A short list of 2–3 project links is better than one generic link.
- Follow up politely. If you don’t hear back, a 1–2 sentence follow-up (after ~1 week) is fine.
- Take smaller roles / internships / freelance gigs. These build experience, credibility, and references.
- Network & referrals. Engage in Python meetups, online communities. A referral or internal lead can push your application up the pile. (stratascratch.com)
7. Timeline & mindset expectations
- Breaking into a paid Python developer role typically takes months, not days.
- The learning curve steepens as you move from toy problems to real-world software.
- Expect rejections. Every "no" is an opportunity to improve your approach or your code.
- Iterate — update your resume, iterate projects, refactor code, sharpen interviews.
As Springboard recommends, think of your journey as a roadmap: start with fundamentals, then add projects, then apply. (springboard.com)
8. Sample Outline / Milestone Plan
| Stage | Goal | Actions |
| --- | --- | --- |
| Month 1 | Master basics | Complete Python syntax drills, small scripts, exercises |
| Month 2 | Intermediate projects | Build one web app + one data script, host them publicly |
| Month 3 | Polish & interview prep | Add tests, documentation, practice problems, behavioral stories |
| Month 4+ | Apply & iterate | Submit applications weekly, refine based on feedback, network |
Conclusion
Landing your first Python developer job may seem daunting, but it’s entirely feasible with the right focus:
- Master your foundational skills — not superficially, but deeply.
- Build a clean, meaningful portfolio that showcases real work.
- Prepare for interviews & communication.
- Use a mix of general and niche job boards — including Talyti — to spot opportunities.
- Apply thoughtfully, learn from feedback, and stay persistent.
With each project, each application, you become stronger. The path isn’t straight, but with dedication, you will land your first Python role.
The Function II: Python Function Decorators
Function decorators enable the addition of new functionality to a function without altering the function’s original functionality. Prior to reading this post, it is important that you have read and understood the first installment on python functions. The major takeaway from that tutorial is that python functions are first-class objects; a result of this is that:
- Python functions can be passed as arguments to other functions.
- Python functions can be returned from other function calls.
- Python functions can be defined inside other functions resulting in closures.
The properties listed above provide the foundation needed to explain function decorators. Put simply, function decorators are "wrappers" that let you execute code before and after the function they decorate, without modifying the function itself. The structure of this tutorial follows an excellent Stack Overflow answer to a question asking for an explanation of python decorators.
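The three properties can be demonstrated in a few lines; the function names below are illustrative, not part of any library:

```python
def shout(text):
    return text.upper()

# 1. functions can be passed as arguments to other functions
def apply_twice(func, value):
    return func(func(value))

# 2. functions can be returned from other function calls, and
# 3. functions can be defined inside other functions (repeat is a closure over n)
def make_repeater(n):
    def repeat(text):
        return text * n
    return repeat

print(apply_twice(shout, "hi"))  # HI
print(make_repeater(3)("ab"))    # ababab
```

Decorators combine exactly these ingredients: a function passed in, a function defined inside, and a function returned.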
Function Decorators
Function decorators are not unique to python so to explain them, we ignore python function decorator syntax for the moment and instead focus on the essence of function decorators. To understand what decorators do, we implement a very trivial function that is decorated with another trivial function that logs calls to the decorated function. The function decoration is achieved using function composition as shown below (follow the comments):
```python
import datetime

# decorator expects another function as argument
def logger(func_to_decorate):
    # A wrapper function is defined on the fly
    def func_wrapper():
        # add any pre original function execution functionality
        print("Calling function: {} at {}".format(
            func_to_decorate.__name__, datetime.datetime.now()))
        # execute original function
        func_to_decorate()
        # add any post original function execution functionality
        print("Finished calling : {}".format(func_to_decorate.__name__))
    # return the wrapper function defined on the fly. Body of the
    # wrapper function has not been executed yet but a closure
    # over the func_to_decorate has been created.
    return func_wrapper

def print_full_name():
    print("My name is John Doe")
```

```python
>>> decorated_func = logger(print_full_name)
>>> decorated_func  # the returned value is a reference to func_wrapper
<function func_wrapper at 0x101ed2578>
>>> decorated_func()
Calling function: print_full_name at 2015-01-24 13:48:05.261413
My name is John Doe
Finished calling : print_full_name
```

In the trivial example defined above, the decorator adds a new feature — printing some information before and after the original function call — to the original function without altering it. The decorator, `logger`, takes a function to be decorated, `print_full_name`, and returns a function, `func_wrapper`, that calls the decorated function, `print_full_name`, when it is executed. The returned function, `func_wrapper`, is closed over the reference to the decorated function, `print_full_name`, and thus can invoke the decorated function when it is executing. In the above, calling `decorated_func` results in `print_full_name` being executed in addition to the code snippets that implement the new functionality. This ability to add new functionality to a function without modifying the original function is the essence of function decorators. Once this concept is understood, the concept of decorators is understood.

Python Decorators
Now that we hopefully understand the essence of function decorators, we move on to deconstructing the python constructs that enable us to define decorators more easily. The previous section describes the essence of decorators, but having to apply decorators via function composition as described is cumbersome. Python introduces the `@` symbol for decorating functions. Decorating a function using python decorator syntax is achieved as shown below:

```python
@decorator
def a_stand_alone_function():
    pass
```

Calling `a_stand_alone_function` now is equivalent to calling the `decorated_func` function from the previous section, but we no longer have to define the intermediate `decorated_func`. Note that decorators can be applied not just to python functions but also to python classes and class methods; we discuss class and method decorators in a later tutorial.
It is important to understand what the `@` symbol does with respect to decorators in python. The `@decorator` line does not define a python decorator; rather, one can think of it as syntactic sugar for decorating a function. I like to define decorating a function as the process of applying an existing decorator to a function. The decorator is the actual function, `decorator`, that adds the new functionality to the original function. According to PEP 318, the following decorator snippet

```python
@dec2
@dec1
def func(arg1, arg2, ...):
    pass
```

is equivalent to

```python
def func(arg1, arg2, ...):
    pass
func = dec2(dec1(func))
```

without the intermediate assignment to the `func` variable. In the above, `@dec1` and `@dec2` are the decorator invocations. Stop, think carefully and ensure you understand this. `dec1` and `dec2` are function object references, and these are the actual decorators. These values can even be replaced by any function call, or by a value that when evaluated returns a function that takes another function. What is of paramount importance is that the name reference following the `@` symbol is a reference to a function object (for this tutorial we assume this should be a function object, but in reality it should be a callable object) that takes a function as argument. Understanding this profound fact will help in understanding python decorators and more involved decorator topics such as decorators that take arguments.

Function Arguments For Decorated Functions
Arguments are passed to a decorated function through the wrapper function, i.e. the inner function returned when the decorator is invoked. We illustrate this with an example below:
```python
import datetime

# decorator expects another function as argument
def logger(func_to_decorate):
    # A wrapper function is defined on the fly
    def func_wrapper(*args, **kwargs):
        # add any pre original function execution functionality
        print("Calling function: {} at {}".format(
            func_to_decorate.__name__, datetime.datetime.now()))
        # execute original function
        func_to_decorate(*args, **kwargs)
        # add any post original function execution functionality
        print("Finished calling : {}".format(func_to_decorate.__name__))
    # return the wrapper function defined on the fly. Body of the
    # wrapper function has not been executed yet but a closure over
    # the func_to_decorate has been created.
    return func_wrapper

@logger
def print_full_name(first_name, last_name):
    print("My name is {} {}".format(first_name, last_name))
```

```python
>>> print_full_name("John", "Doe")
Calling function: print_full_name at 2015-01-24 14:36:36.691557
My name is John Doe
Finished calling : print_full_name
```

Note how we use `*args` and `**kwargs` in defining the inner wrapper function; this is for the simple reason that we cannot know beforehand what arguments will be passed to a function being decorated.

Decorator Function with Function Arguments
We can also pass arguments to the actual decorator function, but this is more involved than passing arguments to decorated functions. We illustrate this with an example below:
```python
import datetime

# this function takes arguments and returns a function.
# the returned function is our actual decorator
def decorator_maker_with_arguments(decorator_arg1):
    # this is our actual decorator that accepts a function
    def decorator(func_to_decorate):
        # wrapper function takes arguments for the decorated function
        def wrapped(function_arg1, function_arg2):
            # add any pre original function execution functionality
            print("Calling function: {} at {} with decorator arguments: {} "
                  "and function arguments: {} {}".format(
                      func_to_decorate.__name__, datetime.datetime.now(),
                      decorator_arg1, function_arg1, function_arg2))
            func_to_decorate(function_arg1, function_arg2)
            # add any post original function execution functionality
            print("Finished calling : {}".format(func_to_decorate.__name__))
        return wrapped
    return decorator

@decorator_maker_with_arguments("Apollo 11 Landing")
def print_name(function_arg1, function_arg2):
    print("My full name is -- {} {} --".format(function_arg1, function_arg2))
```

```python
>>> print_name("Tranquility base ", "To Houston")
Calling function: print_name at 2015-01-24 15:03:23.696982 with decorator arguments: Apollo 11 Landing and function arguments: Tranquility base  To Houston
My full name is -- Tranquility base  To Houston --
Finished calling : print_name
```

As mentioned previously, the key to understanding what is going on here is to note that we can replace the reference value following the `@` in a function decoration with any value that evaluates to a function object that takes another function as argument. In the above, the value returned by the function call `decorator_maker_with_arguments("Apollo 11 Landing")` is the decorator. The call evaluates to a function, `decorator`, that accepts a function as argument. Thus the decoration `@decorator_maker_with_arguments("Apollo 11 Landing")` is equivalent to `@decorator`, but with the decorator, `decorator`, closed over the argument, `"Apollo 11 Landing"`, by the `decorator_maker_with_arguments` function call. Note that the arguments supplied to a decorator cannot be dynamically changed at run time, as they are evaluated when the module is imported.

Functools.wraps
Using decorators involves swapping out one function for another. A result of this is that meta-information, such as the docstring of the swapped-out function, is lost when using a decorator with such a function. This is illustrated below:
```python
import datetime

# decorator expects another function as argument
def logger(func_to_decorate):
    # A wrapper function is defined on the fly
    def func_wrapper():
        # add any pre original function execution functionality
        print("Calling function: {} at {}".format(
            func_to_decorate.__name__, datetime.datetime.now()))
        # execute original function
        func_to_decorate()
        # add any post original function execution functionality
        print("Finished calling : {}".format(func_to_decorate.__name__))
    return func_wrapper

@logger
def print_full_name():
    """return john doe's full name"""
    print("My name is John Doe")
```

```python
>>> print(print_full_name.__doc__)
None
>>> print(print_full_name.__name__)
func_wrapper
```

In the above example an attempt to print the documentation string returns `None`, because the decorator has swapped out the `print_full_name` function with the `func_wrapper` function, which has no documentation string. Even the function name now references the name of the wrapper function rather than that of the actual function. This, most times, is not what we want when using decorators. To work around this, the python `functools` module provides the `wraps` function, which also happens to be a decorator. This decorator is applied to the wrapper function and takes the function to be decorated as argument. The usage is illustrated below:

```python
import datetime
from functools import wraps

# decorator expects another function as argument
def logger(func_to_decorate):
    @wraps(func_to_decorate)
    def func_wrapper(*args, **kwargs):
        # add any pre original function execution functionality
        print("Calling function: {} at {}".format(
            func_to_decorate.__name__, datetime.datetime.now()))
        # execute original function
        func_to_decorate(*args, **kwargs)
        # add any post original function execution functionality
        print("Finished calling : {}".format(func_to_decorate.__name__))
    return func_wrapper

@logger
def print_full_name(first_name, last_name):
    """return john doe's full name"""
    print("My name is {} {}".format(first_name, last_name))
```

```python
>>> print(print_full_name.__doc__)
return john doe's full name
>>> print(print_full_name.__name__)
print_full_name
```

Applications of Decorators
Decorators have a wide variety of applications in python, and these cannot all be covered in this article. Some examples of applications of decorators include:

- Memoization: caching computed values so that expensive computations are not repeated. A memoization decorator wraps a function that performs the actual calculation; for a given argument, if the result has been computed previously, the stored value is returned, otherwise it is computed and stored before being returned to the caller.
- In web applications, decorators can be used to protect endpoints that require authentication; an endpoint is protected with a decorator that checks that a user is authenticated when a request is made to the endpoint. Django, a popular web application framework, makes use of decorators for managing caching and view permissions.
- Decorators can also provide a clean way of carrying out housekeeping tasks such as logging function calls, timing functions, etc.
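To make the memoization bullet concrete, here is a minimal sketch of such a decorator; `memoize` is written for this example (a plain dict cache keyed on positional arguments) rather than taken from any library:

```python
from functools import wraps

def memoize(func):
    """Cache results of func, keyed on its positional arguments."""
    cache = {}

    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)  # compute and store on the first call
        return cache[args]             # served from the cache afterwards
    return wrapper

@memoize
def fib(n):
    """Naive recursive Fibonacci; fast once memoized."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

With the cache in place, `fib(100)` completes almost instantly, since each Fibonacci number is computed only once.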
The uses of decorators vary widely with the situation at hand. The python decorator library provides a wealth of use cases for python decorators; browsing through this collection will provide insight into their practical applications.
Further Reading
Python Comprehensions
Python comprehensions are syntactic constructs that enable sequences to be built from other sequences in a clear and concise manner. Python comprehensions come in three types, namely:
- list comprehensions,
- set comprehensions and
- dict comprehensions.
List comprehension constructs have been part of python since python 2.0 while set and dict comprehensions have been part of python since python 2.7.
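Before looking at each type in turn, here is a minimal sketch showing one comprehension of each kind (the variable names are illustrative):

```python
nums = [1, 2, 3, 4]
squares_list = [n * n for n in nums]     # list comprehension -> [1, 4, 9, 16]
squares_set = {n * n for n in nums}      # set comprehension  -> {1, 4, 9, 16}
squares_dict = {n: n * n for n in nums}  # dict comprehension -> {1: 1, 2: 4, 3: 9, 4: 16}
```

All three share the same `expression for item in iterable` core; only the surrounding brackets and, for dicts, the `key: value` expression differ.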
List Comprehensions
List comprehensions are by far the most popular python comprehension construct. List comprehensions provide a concise way to create a new list of elements that satisfy a given condition from an iterable. An iterable is any python construct that can be looped over; examples of built-in iterables include lists, sets and tuples. The example below, from the python documentation, illustrates the usage of list comprehensions. In this example, we want to create a list of the squares of the numbers from 0 to 9. One conventional way of creating this list without comprehensions is given below:
```python
>>> squares = []
>>> for x in range(10):
...     squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The same list can be created in a more concise manner using a list comprehension:

```python
>>> squares = [x**2 for x in range(10)]
```

The comprehension version is obviously clearer and more concise than the conventional method.
According to the python documentation, a list comprehension consists of square brackets containing an expression followed by a for clause and zero or more for or if clauses as shown below.
```python
[expression for item1 in iterable1 if condition1
            for item2 in iterable2 if condition2
            ...
            for itemN in iterableN if conditionN]
```

The result is a new list resulting from evaluating the expression in the context of the for and if clauses which follow it. For example, to create a list of the squares of the even numbers between 0 and 10, the following comprehension is used:

```python
>>> even_squares = [i**2 for i in range(10) if i % 2 == 0]
>>> even_squares
[0, 4, 16, 36, 64]
```

The expression `i**2` is computed in the context of the for clause that iterates over the numbers from 0 to 9 and the if clause that filters out non-even numbers.

Nested for loops in List Comprehensions
List comprehensions can also be used with multiple or nested for loops. Consider, for example, the simple code fragment shown below that creates tuples from pairs of numbers drawn from the two given sequences:

```python
>>> combs = []
>>> for x in [1,2,3]:
...     for y in [3,1,4]:
...         if x != y:
...             combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```

The above can be rewritten more concisely and simply using a list comprehension:

```python
>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```

It is important to take into consideration the order of the for loops as used in the list comprehension. Careful observation of the snippets with and without a comprehension shows that the for loops in the comprehension appear in the same order as they would if written without a comprehension. The same applies to nested for loops with nesting depth greater than two.
Nested List Comprehensions
List comprehensions can also be nested. Consider the following example drawn from the python documentation of a 3×4 matrix implemented as a list of 3 lists of length 4:
```python
>>> matrix = [
...     [1, 2, 3, 4],
...     [5, 6, 7, 8],
...     [9, 10, 11, 12],
... ]
```

Transposition is a matrix operation that creates a new matrix from an old one, using the rows of the old matrix as the columns of the new matrix and the columns of the old matrix as the rows of the new matrix. The rows and columns of the matrix can be transposed using the following nested list comprehension:

```python
>>> [[row[i] for row in matrix] for i in range(4)]
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```

The above is equivalent to the snippet given below:

```python
>>> transposed = []
>>> for i in range(4):
...     transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```

Set Comprehensions
Set comprehensions were added to python in version 2.7. In set comprehensions, we use braces rather than square brackets. For example, to create the set of the squares of all numbers between 0 and 10, the following set comprehension can be used in lieu of regular looping:

```python
>>> x = {i**2 for i in range(10)}
>>> x
set([0, 1, 4, 81, 64, 9, 16, 49, 25, 36])
```

Dict Comprehensions
Just like set comprehensions, dict comprehensions were added to python in version 2.7. Below we create a mapping of a number to its square using a dict comprehension:

```python
>>> x = {i: i**2 for i in range(10)}
>>> x
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
```

Further Reading
- Python Documentation
- Python Essential Reference, Fourth Edition
- Python 3 Patterns, Recipes and Idioms
Classes and Objects I
In python, everything is an object. Classes provide the mechanism for creating new kinds of objects. In this tutorial, we ignore the basics of classes and object-oriented programming and focus on topics that provide a better understanding of object-oriented programming in python. It is assumed that we are dealing with new-style classes: python classes that inherit from the `object` superclass.
Defining Classes
The `class` statement is used to define new classes. The class statement defines a set of attributes (variables and methods) that are associated with and shared by a collection of instances of such a class. A simple class definition is given below:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance
```

Class definitions introduce the following new objects:
- Class object
- Instance object
- Method object
Class Objects
When a class definition is encountered during the execution of a program, a new namespace is created, and this serves as the namespace into which all class variable and method definition name bindings go. Note that this namespace does not create a new local scope that can be used by class methods, hence the need for fully qualified names when accessing variables in methods. The `Account` class from the previous section illustrates this; methods trying to access the `num_accounts` variable must use the fully qualified name, `Account.num_accounts`, else an error results, as shown below when the fully qualified name is not used in the `__init__` method:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        num_accounts += 1  # not fully qualified: raises an error

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance
```

```python
>>> acct = Account('obi', 10)
Traceback (most recent call last):
  File "python", line 1, in <module>
  File "python", line 9, in __init__
UnboundLocalError: local variable 'num_accounts' referenced before assignment
```

At the end of the execution of a class definition, a class object is created. The scope that was in effect just before the class definition was entered is reinstated, and the class object is bound here to the class name given in the class definition header.
A little diversion here: one may ask, if the class created is an object, then what is the class of the class object? In accordance with the everything-is-an-object philosophy of python, the class object does indeed have a class from which it is created, and for python new-style classes this is the `type` class.

```python
>>> type(Account)
<class 'type'>
```

So, just to confuse you a bit, the type of a type — the Account type — is type. The type class is a metaclass, a class used for creating other classes, and we discuss this in a later tutorial.
Class objects support attribute reference and instantiation. Attributes are referenced using the standard dot syntax of object followed by dot and then attribute name: obj.name. Valid attribute names are all the variable and method names present in the class’s namespace when the class object was created. For example:
```python
>>> Account.num_accounts
0
>>> Account.deposit
<unbound method Account.deposit>
```

Class instantiation uses function notation. Instantiation involves calling the class object like a normal function, supplying any arguments that its `__init__` method expects, as shown below for the `Account` class:

```python
>>> Account('obi', 10)
```
__init__, if it has been defined in the class, is called with the instance object as the first argument. This performs any user defined initialization such as initializing instance variable values. In the case of theAccountclass the account name and balance are set and the number of instance objects is incremented by one.Instance Objects
If class objects are the cookie cutters, then instance objects are the cookies that result from instantiating class objects. Attribute references — to data and method objects — are the only operations that are valid on instance objects.
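The distinction between class attributes (shared by all instances) and instance attributes (unique per instance) can be sketched with a trimmed-down version of the `Account` class from above, re-declared here so the snippet is self-contained:

```python
class Account(object):
    num_accounts = 0  # class attribute, shared by every instance

    def __init__(self, name, balance):
        self.name = name        # instance attributes, one per object
        self.balance = balance
        Account.num_accounts += 1

    def inquiry(self):
        return self.balance

a = Account('ada', 100)
b = Account('obi', 50)
print(Account.num_accounts)  # 2 -- one counter, shared
print(a.balance)             # 100 -- per-instance state
print(b.inquiry())           # 50
```

Each cookie carries its own `name` and `balance`, while the cutter keeps a single shared count.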
Method Objects
Method objects are similar to function objects. If `x` is an instance of the `Account` class, `x.deposit` is an example of a method object. Methods have an extra argument included in their definition, the `self` argument. This `self` argument refers to an instance of the class. Why do we have to pass an instance as an argument to a method? This is best illustrated by a method call:

```python
>>> x = Account('obi', 10)
>>> x.inquiry()
10
```

What exactly happens when an instance method is called? You may have noticed that `x.inquiry()` is called without an argument above, even though the method definition for `inquiry()` requires the `self` argument. What happened to this argument?

The special thing about methods is that the object on which a method is being called is passed as the first argument of the function. In our example, the call to `x.inquiry()` is exactly equivalent to `Account.inquiry(x)`. In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method’s object before the first argument.
When an instance attribute is referenced that isn’t a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.
The above applies to all instance method objects, including the `__init__` method. The `self` argument is actually not a keyword, and any valid argument name can be used, as shown in the definition of the `Account` class below:

```python
class Account(object):
    num_accounts = 0

    def __init__(obj, name, balance):
        obj.name = name
        obj.balance = balance
        Account.num_accounts += 1

    def del_account(obj):
        Account.num_accounts -= 1

    def deposit(obj, amt):
        obj.balance = obj.balance + amt

    def withdraw(obj, amt):
        obj.balance = obj.balance - amt

    def inquiry(obj):
        return obj.balance
```

```python
>>> Account.num_accounts
0
>>> x = Account('obi', 0)
>>> x.deposit(10)
>>> Account.inquiry(x)
10
```

Static and Class Methods
All methods defined in a class by default operate on instances. However, one can define static or class methods by decorating such methods with the corresponding `@staticmethod` or `@classmethod` decorators.

Static Methods
Static methods are normal functions that exist in the namespace of a class. Referencing a static method from a class shows that a function type, rather than an unbound method type, is returned:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @staticmethod
    def type():
        return "Current Account"
```

```python
>>> Account.deposit
<unbound method Account.deposit>
>>> Account.type
<function type at 0x106893668>
```

To define a static method, the `@staticmethod` decorator is used, and such methods do not require the `self` argument. Static methods provide a mechanism for better organization: code related to a class is placed in that class and can be overridden in a subclass as needed.

Class Methods
Class methods, as the name implies, operate on classes themselves rather than on instances. Class methods are created using the `@classmethod` decorator, with the class rather than an instance passed as the first argument to the method.

```python
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_json(cls, params_json):
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

    @staticmethod
    def type():
        return "Current Account"
```

A motivating example of the use of class methods is as a factory for object creation. Imagine that data for the `Account` class comes in different formats, such as tuples, JSON strings and so on. We cannot define multiple `__init__` methods, as a python class can have only one `__init__` method, so class methods come in handy for such situations. In the `Account` class defined above, for example, we want to initialize an account from a JSON string object, so we define a class factory method, `from_json`, that takes a JSON string, handles the extraction of parameters and creates the account object using the extracted parameters. Another example of a class method in action is the `dict.fromkeys` method, which is used to create dict objects from a sequence of supplied keys and a value.

Python Special Methods
Sometimes we may want to customize user-defined classes, either to change the way class objects are created and initialized or to provide polymorphic behavior for certain operations. Polymorphic behavior enables user-defined classes to provide their own implementations for certain python operations, such as the `+` operation. Python provides special methods that enable this. These methods are normally of the form `__*__`, where `*` refers to a method name. Examples of such methods are `__init__` and `__new__` for customizing object creation and initialization; `__getitem__`, `__get__`, `__add__` and `__sub__` for emulating built-in python types; and `__getattribute__`, `__getattr__` and others for customizing attribute access. These are just a few of the special methods. We discuss a few important special methods below to provide an understanding, but the python documentation provides a comprehensive list.

Special methods for Object Creation
New class instances are created in a two-step process: the `__new__` method is called to create a new instance, and the `__init__` method is then called to initialize the newly created object. Users are already familiar with defining the `__init__` method; the `__new__` method is rarely defined by the user for a class, but it can be defined when one wants to customize the creation of class instances.

Special methods for Attribute access
We can customize attribute access for class instances by implementing the following methods.

```python
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def __getattr__(self, name):
        return "Hey I dont see any attribute called {}".format(name)

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_dict(cls, params):
        params_dict = json.loads(params)
        return cls(params_dict.get("name"), params_dict.get("balance"))

    @staticmethod
    def type():
        return "Current Account"

x = Account('obi', 0)
```

`__getattr__(self, name)`: This method is only called when an attribute, `name`, is referenced that is neither an instance attribute nor found in the class tree for the object. It should return some value for the attribute or raise an `AttributeError` exception. For example, if `x` is an instance of the `Account` class defined above, trying to access an attribute that does not exist results in a call to this method:

```python
>>> acct = Account("obi", 10)
>>> acct.number
Hey I dont see any attribute called number
```

Note that if `__getattr__` references instance attributes that do not exist, an infinite loop may occur, because the `__getattr__` method is then called successively without end.

`__setattr__(self, name, value)`: This method is called whenever an attribute assignment is attempted. `__setattr__` should insert the value being assigned into the dictionary of instance attributes rather than use `self.name = value`, which would result in a recursive call and hence an infinite loop.

`__delattr__(self, name)`: This method is called whenever an attribute deletion such as `del obj.name` is attempted.

`__getattribute__(self, name)`: This method is always called to implement attribute access for instances of the class.
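As a small, hedged sketch of the `__setattr__` guidance above (the class name here is invented for illustration), assignments are routed through the instance dictionary to avoid infinite recursion:

```python
class LoggedAccount(object):
    def __setattr__(self, name, value):
        # log the assignment, then write directly into the instance
        # dict; using "self.name = value" here would call __setattr__
        # again and recurse forever
        print("setting {} = {}".format(name, value))
        self.__dict__[name] = value

acct = LoggedAccount()
acct.balance = 10
print(acct.balance)  # 10
```

Writing to `self.__dict__` bypasses `__setattr__`, which is exactly why it is the safe place to store the value.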
Special methods for Type Emulation
Python defines special syntax for use with certain types; for example, the elements of lists and tuples can be accessed using the index notation `[]`, numeric values can be added with the `+` operator, and so on. We can create our own classes that make use of this special syntax by implementing certain special methods that the python interpreter calls whenever it encounters such syntax. We illustrate this below with a very simple example that emulates the basics of a python list.

```python
class CustomList(object):
    def __init__(self, container=None):
        # the class is just a wrapper around another list to
        # illustrate special methods
        if container is None:
            self.container = []
        else:
            self.container = container

    def __len__(self):
        # called when a user calls len(CustomList instance)
        return len(self.container)

    def __getitem__(self, index):
        # called when a user uses square brackets for indexing
        return self.container[index]

    def __setitem__(self, index, value):
        # called when a user performs an index assignment
        if index < len(self.container):
            self.container[index] = value
        else:
            raise IndexError()

    def __contains__(self, value):
        # called when the user uses the 'in' keyword
        return value in self.container

    def append(self, value):
        self.container.append(value)

    def __repr__(self):
        return str(self.container)

    def __add__(self, otherList):
        # provides support for the use of the + operator
        return CustomList(self.container + otherList.container)
```

In the above, `CustomList` is a thin wrapper around an actual list. We have implemented some custom methods for illustration purposes:

`__len__(self)`: Called when the `len()` function is called on an instance of `CustomList`:

```python
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> len(myList)
4
```

`__getitem__(self, index)`: Provides support for the use of square-bracket indexing on an instance of the `CustomList` class:

```python
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3]
4
```

`__setitem__(self, index, value)`: Called to implement the assignment of `value` to `self[index]` on an instance of the `CustomList` class:

```python
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3] = 100
>>> myList[3]
100
```

`__contains__(self, value)`: Called to implement the membership test operators. Should return true if the item is in `self`, false otherwise:

```python
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> 4 in myList
True
```

`__repr__(self)`: Called to compute the representation of `self`, which is what `print` displays for this class:

```python
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> print(myList)
[1, 2, 3, 4]
```

`__add__(self, otherList)`: Called to compute the addition of two instances of `CustomList` when the `+` operator is used to add them together:

```python
>>> myList = CustomList()
>>> otherList = CustomList()
>>> otherList.append(100)
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList + otherList + otherList
[1, 2, 3, 4, 100, 100]
```
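One related subtlety, sketched below with the same `CustomList` idea trimmed to the relevant methods: implementing `__len__` also gives instances truth-value semantics, because python falls back on the length when no explicit boolean-conversion method is defined.

```python
class CustomList(object):
    def __init__(self, container=None):
        self.container = [] if container is None else container

    def __len__(self):
        # also consulted for truth-value testing: a length of zero
        # makes the instance falsy
        return len(self.container)

    def append(self, value):
        self.container.append(value)

empty = CustomList()
full = CustomList([1, 2, 3])
print(bool(empty))  # False
print(bool(full))   # True
```

This is one reason implementing `__len__` is worthwhile even for simple container wrappers.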
The above provides an example of how we can customize class behavior by defining certain special methods. For a comprehensive listing of all such methods, see the python documentation. In a follow-up tutorial, we put together all we have discussed about special methods and explain descriptors, a very important piece of functionality with widespread use in python object oriented programming.
Further Reading
- Python Essential Reference
- Python Data Model
-
Classes and Objects III: Metaclasses, Abstract Base Classes and Class Decorators
Metaclasses
Everything in python is an object, including classes; if a class is an object, then such a class must have another class from which it is created.

Consider an instance, `f`, of a user-defined class `Foo`; we can find out the type/class of the instance `f` by using the built-in `type`, and in this case we see that the type of `f` is `Foo`:

```python
>>> class Foo(object):
...     pass
...
>>> f = Foo()
>>> type(f)
<class '__main__.Foo'>
```

Given that everything in python is an object, including classes, we can also introspect on a class object to find out the type/class of that class. To illustrate this, we introspect on our previous class, `Foo`, using the `type` built-in:

```python
>>> type(Foo)
<class 'type'>
```

In new-style classes such as that defined above, the class used for creating all other class objects is the `type` class. This applies to user-defined classes, as shown above, as well as built-in classes, as shown below:

```python
>>> type(dict)
<class 'type'>
```

Classes such as the `type` class that are used to create other classes are called metaclasses in python. That is all there is to metaclasses: they are classes that are used to create other classes. Custom metaclasses are not often used in python, but sometimes we want to control the way our classes are created; for example, we may want to check that every method has some kind of documentation. This is where custom metaclasses come in handy.

Before explaining how metaclasses are used to customize class creation, we look in detail at how python class objects are created when a class statement is encountered during the execution of a python script.
```python
# class definition
class Foo(object):
    def __init__(self, name):
        self.name = name

    def print_name(self):
        print(self.name)
```

The above snippet is the class definition for a simple class that every python user is familiar with, but it is not the only way such a class can be defined. The snippet below shows a more involved method for defining the same class, with all the syntactic sugar provided by the `class` keyword stripped away; it gives a better understanding of what actually goes on under the covers during the execution of a python class statement:

```python
class_name = "Foo"
class_parents = (object,)
class_body = """
def __init__(self, name):
    self.name = name

def print_name(self):
    print(self.name)
"""
# a new dict is used as the local namespace
class_dict = {}

# the body of the class is executed using the dict from above as the
# local namespace
exec(class_body, globals(), class_dict)

# viewing the class dict reveals the name bindings from the class body
>>> class_dict
{'__init__': <function __init__ at 0x10066f8c8>, 'print_name': <function print_name at 0x10066fa60>}

# final step of class creation
Foo = type(class_name, class_parents, class_dict)
```

When a new class is defined, the body of the class is executed as a set of statements within its own namespace (its own dict). As a final step in the class creation process, the class object is created by instantiating the `type` metaclass, passing in the class name, base classes and dictionary as arguments. The snippet above shows how the metaclass comes into play during class creation, but classes are not normally defined this way; rather, they are defined with the `class` statement, and it is this process that a custom metaclass allows us to control.

The metaclass used in class creation can be explicitly specified by setting a `__metaclass__` variable or by supplying the `metaclass` keyword argument in a `class` definition. If neither is supplied, the class statement examines the first entry in the tuple of base classes, if any. If no base classes are used, the global variable `__metaclass__` is searched for, and if no value is found, python uses the default metaclass.

Armed with a basic understanding of metaclasses, we illustrate how metaclasses can be of use to python programmers.
Metaclasses in Action
We can define custom metaclasses that are used when creating classes. These custom metaclasses will normally inherit from `type` and re-implement certain methods such as `__init__` and `__new__`.

We start with a trivial example.
Imagine that you are the chief architect for a shiny new project. You have diligently read dozens of software engineering books and style guides that hammer on the importance of docstrings, so you want to enforce the requirement that every non-private method in the project has a docstring. How would you enforce this requirement?

A simple and straightforward answer is to create a custom metaclass, used across the project, that enforces the requirement. The snippet below, though not production ready, is an example of such a metaclass.
```python
class DocMeta(type):
    def __init__(self, name, bases, attrs):
        for key, value in attrs.items():
            # skip special and private methods
            if key.startswith("__"):
                continue
            # skip any non-callable
            if not hasattr(value, "__call__"):
                continue
            # check for a docstring. a better way may be to collect
            # all methods without a docstring and then throw an error
            # listing all of them rather than stopping at the first
            if not getattr(value, '__doc__'):
                raise TypeError("%s must have a docstring" % key)
        type.__init__(self, name, bases, attrs)
```

We create a `type` subclass, `DocMeta`, that overrides the `type` class `__init__` method. The implemented `__init__` method iterates through all the class attributes searching for non-private methods missing a docstring; if one is encountered, an exception is thrown at class creation time, as shown below:

```python
class Car(object):
    __metaclass__ = DocMeta

    def __init__(self, make, model, color):
        self.make = make
        self.model = model
        self.color = color

    def change_gear(self):
        print("Changing gear")

    def start_engine(self):
        print("Starting engine")
```

```
Traceback (most recent call last):
  File "abc.py", line 47, in <module>
    class Car(object):
  File "abc.py", line 42, in __init__
    raise TypeError("%s must have a docstring" % key)
TypeError: change_gear must have a docstring
```

Another trivial example illustrating the use of python metaclasses is creating a final class, that is, a class that cannot be sub-classed. Some people may feel this is unpythonic, but for illustration purposes we implement a metaclass enforcing this requirement below:
```python
class final(type):
    def __init__(cls, name, bases, namespace):
        super(final, cls).__init__(name, bases, namespace)
        for klass in bases:
            if isinstance(klass, final):
                raise TypeError(str(klass.__name__) + " is final")
```

```python
>>> class B(object):
...     __metaclass__ = final
...
>>> class C(B):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in __init__
TypeError: B is final
```

In the above example, the metaclass simply performs a check ensuring that a final class is never among the base classes of any class being created.
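Custom metaclasses can also modify the class being created rather than just check it. Below is a hedged sketch of this idea; the names are invented for illustration, and the metaclass is invoked directly (just as the `class` statement ultimately would) to keep the sketch version-neutral:

```python
class AutoRepr(type):
    def __new__(mcls, name, bases, attrs):
        # modify the class dict before the class object is created:
        # inject a simple __repr__ if the class does not define one
        if "__repr__" not in attrs:
            attrs["__repr__"] = lambda self: "<{} instance>".format(name)
        return super(AutoRepr, mcls).__new__(mcls, name, bases, attrs)

# calling the metaclass directly, as the class statement would
Point = AutoRepr("Point", (object,), {})
print(repr(Point()))  # <Point instance>
```

Because the class dict is altered before `type.__new__` runs, the injected `__repr__` is part of the class from the moment it exists.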
There is sometimes confusion over whether to override `__init__` or `__new__` when defining a metaclass. The decision depends on what we are trying to achieve: if we want to modify the class by changing some class attribute, we override the `__new__` method, but when we just want to carry out checks, such as those above, we override the `__init__` method of the metaclass.

Abstract Base Classes
Sometimes we want to enforce a contract between the classes in our program. For example, we may want all classes of a given kind to implement a particular set of methods and properties; this is accomplished with interfaces and abstract classes in statically typed languages like Java. In python we could create a base class with default methods and have all other classes inherit from it, but what if we want each subclass to provide its own implementation, and we want to enforce this rule? We could define all the needed methods in a base class and have them raise a `NotImplementedError` exception, so that subclasses must implement these methods if they are going to be used. However, this does not fully solve the problem: we could still have subclasses that don't implement a method, and we would not know until the method call was attempted at runtime. Another issue is that of a proxy object that passes method calls on to another object. Even if such an object implements all the required methods of a type via its proxied object, an `isinstance` test on the proxy for the proxied type will fail to produce the correct result.

Python's abstract base classes provide a simple and elegant solution to the issues mentioned above. The abstract base class functionality is provided by the `abc` module. This module defines a metaclass and a set of decorators that are used in the creation of abstract base classes.
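To make the proxy problem concrete before introducing the `abc` machinery, here is a minimal sketch (the `Engine` names are made up for illustration): the proxy forwards every method call, yet still fails an `isinstance` test.

```python
class Engine(object):
    def start(self):
        return "engine started"

class EngineProxy(object):
    # forwards all attribute access to the wrapped Engine
    def __init__(self, engine):
        self._engine = engine

    def __getattr__(self, name):
        return getattr(self._engine, name)

proxy = EngineProxy(Engine())
print(proxy.start())              # engine started
print(isinstance(proxy, Engine))  # False, despite behaving like one
```

Registering the proxy class with an abstract base class, as shown later, is one way to make such type tests give the expected answer.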
When defining an abstract base class, we use the `ABCMeta` metaclass from the `abc` module as the metaclass of the abstract base class, and we make use of the `@abstractmethod` and `@abstractproperty` decorators to create methods and properties that must be implemented by non-abstract subclasses. If a subclass does not implement all the abstract methods and properties, it is itself an abstract class and cannot be instantiated, as illustrated below:

```python
from abc import ABCMeta, abstractmethod

class Vehicle(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def change_gear(self):
        pass

    @abstractmethod
    def start_engine(self):
        pass

class Car(Vehicle):
    def __init__(self, make, model, color):
        self.make = make
        self.model = model
        self.color = color
```

```python
# abstract methods not implemented
>>> car = Car("Toyota", "Avensis", "silver")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Car with abstract methods change_gear, start_engine
```

Once a class implements all the abstract methods, it becomes a concrete class and can be instantiated by a user:

```python
from abc import ABCMeta, abstractmethod

class Vehicle(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def change_gear(self):
        pass

    @abstractmethod
    def start_engine(self):
        pass

class Car(Vehicle):
    def __init__(self, make, model, color):
        self.make = make
        self.model = model
        self.color = color

    def change_gear(self):
        print("Changing gear")

    def start_engine(self):
        print("Starting engine")
```

```python
>>> car = Car("Toyota", "Avensis", "silver")
>>> print(isinstance(car, Vehicle))
True
```

Abstract base classes also allow existing classes to be registered as part of their hierarchy, although no check is performed on whether those classes actually implement the methods and properties marked as abstract. This provides a simple solution to the second issue raised in the opening paragraph: we can register a proxy class with an abstract base class, and an `isinstance` check will then return the correct answer.

```python
from abc import ABCMeta, abstractmethod

class Vehicle(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def change_gear(self):
        pass

    @abstractmethod
    def start_engine(self):
        pass

class Car(object):
    def __init__(self, make, model, color):
        self.make = make
        self.model = model
        self.color = color
```

```python
>>> Vehicle.register(Car)
>>> car = Car("Toyota", "Avensis", "silver")
>>> print(isinstance(car, Vehicle))
True
```

Abstract base classes are used a lot in the python standard library. They provide a means to group python objects, such as the number types, that have a relatively flat hierarchy. The `collections` module also contains abstract base classes for various kinds of operations involving sets, sequences and dictionaries. Whenever we want to enforce contracts between classes in python, just as interfaces do in Java, abstract base classes are the way to go.

Class Decorators
Just as functions can be decorated with other functions, classes can also be decorated in python. We decorate classes to add required functionality that may be external to the class implementation; for example, we may want to enforce the singleton pattern for a given class. Some functionality implemented with class decorators can also be implemented with metaclasses, but class decorators sometimes make for a cleaner implementation.

The most popular example used to illustrate class decorators is that of a registry that records class ids as classes are created:
```python
registry = {}

def register(cls):
    registry[cls.__clsid__] = cls
    return cls

@register
class Foo(object):
    __clsid__ = "123-456"

    def bar(self):
        pass
```

Another example of using class decorators is implementing the singleton pattern, as shown below:

```python
def singleton(cls):
    instances = {}
    def get_instance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return get_instance
```

The decorator defined above can be used to decorate any python class, forcing that class to create a single instance of itself throughout the lifetime of the program:

```python
@singleton
class Foo(object):
    pass
```

```python
>>> x = Foo()
>>> id(x)
4310648144
>>> y = Foo()
>>> id(y)
4310648144
>>> id(y) == id(x)
True
```

In the above example, we initialize the `Foo` class twice; comparing the ids of both objects shows that they refer to a single instance of the class. The same functionality can be achieved with a metaclass by overriding the `__call__` method of the metaclass, as shown below:

```python
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class Foo(object):
    __metaclass__ = Singleton
```

```python
>>> x = Foo()
>>> y = Foo()
>>> id(x)
4310648400
>>> id(y)
4310648400
>>> id(y) == id(x)
True
```

Further Reading
- Python Essential Reference 4th Edition; David Beazley.
- Stack Overflow: Creating a singleton in python.
- Python Documentation: abc – Abstract Base Classes.
-
Intermediate Pythonista table of contents
Table of Contents
- Python comprehensions
- Introduction to Python Generators
- The Function
- The Function II: Python Function Decorators
- Classes and Objects
- Classes and Objects II: Descriptors
- Classes and Objects III: Types and Metaclasses
- Intermezzo I: A little Python History
What is this all about?
I have been working with python for close to five years, and in all these years it has been a struggle to find tutorials or blogs that cover a well-defined set of intermediate python topics. Most tutorials are geared towards beginners or cover a single advanced topic. I have therefore decided to write my own set of tutorials on python topics that I consider to be of intermediate difficulty.
If you have any feedback or suggestions, don’t hesitate to reach out to me on twitter. If you enjoyed the write-ups, why not check out my book Intermediate Python on leanpub.
-
Classes and Objects II: Descriptors
Descriptors are an esoteric but integral part of the python programming language. They are used widely in the core of the language, and a good grasp of descriptors gives a python programmer an extra trick in his or her toolbox. To set the stage for the discussion of descriptors, I describe some scenarios that a programmer may encounter in daily programming activities; I then explain what descriptors are and how they provide elegant solutions to these scenarios. In this writeup, I refer to python versions that use new-style classes.
- Consider a program in which we need to enforce strict type checking for object attributes. Python is a dynamic language and thus does not enforce type checking, but this does not prevent us from implementing our own version, however rudimentary it may be. The conventional way to type check object attributes may take the form shown below:

```python
def __init__(self, name, age):
    if isinstance(name, str):
        self.name = name
    else:
        raise TypeError("Must be a string")
    if isinstance(age, int):
        self.age = age
    else:
        raise TypeError("Must be an int")
```

The above is one way of enforcing such type checking, but it becomes cumbersome as the arguments grow in number. Alternatively, we could create a `type_check(type, val)` function that is called in the `__init__` method before assignment, but then how would we easily apply the check when we want to set the attribute value somewhere else? A quick solution that comes to mind is the getters and setters found in Java, but that is un-pythonic and cumbersome.

- Consider a program in which we want to create attributes that are initialized once at run time and then become read-only. One could think of ways to implement this using python special methods, but once again such an implementation would be unwieldy and cumbersome.

- Finally, imagine a program in which we want to customize object attribute access in some other way, for example to log every access. Once again, it is not too difficult to come up with a solution, although it may be unwieldy and not reusable.

All the issues mentioned above are linked by the fact that they relate to attribute references; in each case we are trying to customize attribute access.
Python Descriptors
Descriptors provide solutions to the above issues that are elegant, simple, robust and reusable. Simply put, a descriptor is an object that represents the value of an attribute. This means that if an account object has an attribute `name`, a descriptor is another object that can be used to represent the value held by that attribute. A descriptor is any object that implements any of the `__get__`, `__set__` or `__delete__` special methods of the descriptor protocol. The signature for each of these methods is shown below:
```python
descr.__get__(self, obj, type=None) --> value
descr.__set__(self, obj, value) --> None
descr.__delete__(self, obj) --> None
```

Objects implementing only the `__get__` method are non-data descriptors, which means the attributes they represent can be read but not written through the descriptor, while objects implementing both `__get__` and `__set__` are data descriptors, which means such attributes are also writable.

To get a better understanding of descriptors, we provide descriptor-based solutions to the issues mentioned earlier. Implementing type checking on an object attribute using python descriptors is a very simple task. A descriptor implementing this type checking is shown below:
```python
class TypedProperty(object):
    def __init__(self, name, type, default=None):
        self.name = "_" + name
        self.type = type
        self.default = default if default else type()

    def __get__(self, instance, cls):
        return getattr(instance, self.name, self.default)

    def __set__(self, instance, value):
        if not isinstance(value, self.type):
            raise TypeError("Must be a %s" % self.type)
        setattr(instance, self.name, value)

    def __delete__(self, instance):
        raise AttributeError("Can't delete attribute")

class Foo(object):
    name = TypedProperty("name", str)
    num = TypedProperty("num", int, 42)
```

```python
>>> acct = Foo()
>>> acct.name = "obi"
>>> acct.num = 1234
>>> print(acct.num)
1234
>>> print(acct.name)
obi
>>> # trying to assign a string to num fails
>>> acct.num = '1234'
TypeError: Must be a <type 'int'>
```

In the example, we implement a descriptor, `TypedProperty`, that enforces type checking for any attribute of a class that it is used to represent. It is important to note that descriptors can only legally be defined at the class level, as shown in the example above, rather than at the instance level, i.e. not in the `__init__` method.

When an attribute of a `Foo` instance is accessed, the descriptor's `__get__` method is called. Notice that the first argument to the `__get__` method is the object from which the attribute the descriptor represents is being referenced. When the attribute is assigned to, the descriptor's `__set__` method is called. To understand why descriptors can be used to represent object attributes, we need to understand how attribute reference resolution is carried out in python. For objects, the machinery for attribute resolution lives in `object.__getattribute__()`. This method transforms `b.x` into `type(b).__dict__['x'].__get__(b, type(b))`. The resolution then searches for the attribute using a precedence chain that gives data descriptors found in the class dict priority over instance variables, gives instance variables priority over non-data descriptors, and assigns lowest priority to `__getattr__()` if provided. This precedence chain can be overridden by defining a custom `__getattribute__` method for a given class.

With a firm understanding of the mechanics of descriptors, it is easy to imagine elegant solutions to the second and third issues raised in the previous section. Implementing a read-only attribute becomes a simple case of implementing a data descriptor whose `__set__` method raises an `AttributeError`. Customizing access, though trivial in this instance, would just involve adding the required functionality to the `__get__` and `__set__` methods.

Class Properties
Having to define descriptor classes each time we want to use them is cumbersome. Python properties provide a concise way of adding data descriptors to attributes. The property signature is given below:

```
property(fget=None, fset=None, fdel=None, doc=None) -> property attribute
```

`fget`, `fset` and `fdel` are the getter, setter and deleter methods for the class. We illustrate creating properties with an example below:

```python
class Account(object):
    def __init__(self):
        self._acct_num = None

    def get_acct_num(self):
        return self._acct_num

    def set_acct_num(self, value):
        self._acct_num = value

    def del_acct_num(self):
        del self._acct_num

    acct_num = property(get_acct_num, set_acct_num, del_acct_num,
                        "Account number property.")
```

If `acct` is an instance of `Account`, then `acct.acct_num` will invoke the getter, `acct.acct_num = value` will invoke the setter, and `del acct.acct_num` will invoke the deleter.

The property object and its functionality can be implemented in python using the descriptor protocol, as illustrated in the Descriptor How-To Guide:
```python
class Property(object):
    "Emulate PyProperty_Type() in Objects/descrobject.c"

    def __init__(self, fget=None, fset=None, fdel=None, doc=None):
        self.fget = fget
        self.fset = fset
        self.fdel = fdel
        if doc is None and fget is not None:
            doc = fget.__doc__
        self.__doc__ = doc

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.fget is None:
            raise AttributeError("unreadable attribute")
        return self.fget(obj)

    def __set__(self, obj, value):
        if self.fset is None:
            raise AttributeError("can't set attribute")
        self.fset(obj, value)

    def __delete__(self, obj):
        if self.fdel is None:
            raise AttributeError("can't delete attribute")
        self.fdel(obj)

    def getter(self, fget):
        return type(self)(fget, self.fset, self.fdel, self.__doc__)

    def setter(self, fset):
        return type(self)(self.fget, fset, self.fdel, self.__doc__)

    def deleter(self, fdel):
        return type(self)(self.fget, self.fset, fdel, self.__doc__)
```

Python also provides the
`@property` decorator that can be used to create read-only attributes. A property object has `getter`, `setter`, and `deleter` decorator methods that can be used to create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example:

```python
class C(object):
    def __init__(self):
        self._x = None

    @property  # the decorator creates a read-only x property
    def x(self):
        return self._x

    @x.setter  # the x property setter makes the property writeable
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x
```

If we wanted to make the property read-only, we would leave out the `setter` method.

Descriptors see wide application in the python language itself. Python functions, class methods and static methods are all examples of non-data descriptors. The Descriptor How-To Guide provides a basic description of how the listed python objects are implemented using descriptors.
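As a quick sketch of the read-only case (the `Celsius` class here is our own illustration, not from the original article), a property defined without a setter rejects assignment with an `AttributeError`:

```python
class Celsius(object):
    def __init__(self, degrees):
        self._degrees = degrees

    @property
    def degrees(self):  # getter only: no @degrees.setter is defined
        return self._degrees

c = Celsius(25)
print(c.degrees)  # reading works

try:
    c.degrees = 30  # writing fails because the property has no setter
except AttributeError:
    print("assignment rejected")
```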
Further Reading
- Descriptor How-To Guide
- Python Essential Reference 4th Edition; David Beazley
- Caching in python with a descriptor and a decorator
- Inside story on new style classes
-
Classes and Objects I
In python, everything is an object. Classes provide the mechanism for creating new kinds of objects. In this tutorial, we skip the basics of classes and object oriented programming and focus on topics that provide a better understanding of object oriented programming in python. It is assumed that we are dealing with new style classes, that is, python classes that inherit from the `object` superclass.
Defining Classes
The `class` statement is used to define new classes. A class definition introduces a set of attributes, variables and methods, that are associated with and shared by a collection of instances of the class. A simple class definition is given below:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance
```

Class definitions introduce the following new objects:
- Class object
- Instance object
- Method object
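Before looking at each in turn, a short interactive sketch (our own illustration, using a stripped-down version of the `Account` class above) distinguishes the three kinds of objects:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def deposit(self, amt):
        self.balance = self.balance + amt

# the class object itself; the class of a new-style class is `type`
print(type(Account))

# an instance object, created by calling the class
acct = Account("obi", 10)
print(isinstance(acct, Account))

# a method object, obtained by referencing a function attribute on the instance
method = acct.deposit
method(5)  # equivalent to acct.deposit(5)
print(acct.balance)
```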
Class Objects
When a class definition is encountered during the execution of a program, a new namespace is created, and this serves as the namespace into which all class variable and method definition name bindings go. Note that this namespace does not create a new local scope that can be used by class methods, hence the need for fully qualified names when accessing variables from methods. The `Account` class from the previous section illustrates this; methods trying to access the `num_accounts` variable must use the fully qualified name, `Account.num_accounts`, else an error results, as shown below when the fully qualified name is not used in the `__init__` method:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance
```

```
>>> acct = Account('obi', 10)
Traceback (most recent call last):
  File "python", line 1, in <module>
  File "python", line 9, in __init__
UnboundLocalError: local variable 'num_accounts' referenced before assignment
```

At the end of the execution of a class definition, a class object is created. The scope that was in effect just before the class definition was entered is reinstated, and the class object is bound there to the class name given in the class definition header.
A little diversion here: one may ask, if the class created is an object, then what is the class of the class object? In accordance with the everything-is-an-object philosophy of python, the class object does indeed have a class from which it is created, and for new style classes this is the `type` class.

```
>>> type(Account)
<class 'type'>
```

So, just to confuse you a bit, the type of a type (the `Account` type) is `type`. The `type` class is a metaclass, a class used for creating other classes, and we discuss this in a later tutorial.
Class objects support attribute reference and instantiation. Attributes are referenced using the standard dot syntax: the object, followed by a dot, then the attribute name, as in `obj.name`. Valid attribute names are all the variable and method names present in the class's namespace when the class object was created. For example:

```
>>> Account.num_accounts
0
>>> Account.deposit
<unbound method Account.deposit>
```

Class instantiation uses function notation: instantiation involves calling the class object like a normal function, with any arguments its `__init__` method requires, as shown below for the `Account` class:

```
>>> Account('obi', 10)
```

After instantiation of a class object, an instance object is returned and `__init__`, if it has been defined in the class, is called with the instance object as its first argument. This performs any user-defined initialization, such as initializing instance variable values. In the case of the `Account` class, the account name and balance are set and the number of instance objects is incremented by one.

Instance Objects
If class objects are the cookie cutters, then instance objects are the cookies that result from instantiating class objects. Attribute references, to both data attributes and method objects, are the only operations that are valid on instance objects.
Method Objects
Method objects are similar to function objects. If `x` is an instance of the `Account` class, `x.deposit` is an example of a method object.

Methods have an extra argument included in their definition, the `self` argument. This `self` argument refers to an instance of the class. Why do we have to pass an instance as an argument to a method? This is best illustrated by a method call:

```
>>> x = Account('obi', 10)
>>> x.inquiry()
10
```

What exactly happens when an instance method is called? You may have noticed that `x.inquiry()` is called without an argument above, even though the method definition for `inquiry()` requires the `self` argument. What happened to this argument?

The special thing about methods is that the object on which a method is being called is passed as the first argument of the function. In our example, the call to `x.inquiry()` is exactly equivalent to `Account.inquiry(x)`. In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method's object before the first argument.

The python tutorial says:
When an instance attribute is referenced that isn’t a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.
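A minimal sketch of this equivalence, using a stripped-down `Account` (our own reconstruction of the article's class):

```python
class Account(object):
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

    def deposit(self, amt):
        self.balance = self.balance + amt

x = Account("obi", 0)

x.deposit(10)           # method call: x is passed implicitly as self
Account.deposit(x, 10)  # equivalent plain function call: x passed explicitly
print(x.balance)        # both calls ran, so the balance is now 20
```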
The above applies to all instance method objects, including the `__init__` method. The `self` argument is actually not a keyword; any valid argument name can be used, as shown in the definition of the `Account` class below:

```python
class Account(object):
    num_accounts = 0

    def __init__(obj, name, balance):
        obj.name = name
        obj.balance = balance
        Account.num_accounts += 1

    def del_account(obj):
        Account.num_accounts -= 1

    def deposit(obj, amt):
        obj.balance = obj.balance + amt

    def withdraw(obj, amt):
        obj.balance = obj.balance - amt

    def inquiry(obj):
        return obj.balance
```

```
>>> Account.num_accounts
0
>>> x = Account('obi', 0)
>>> x.deposit(10)
>>> Account.inquiry(x)
10
```

Static and Class Methods
All methods defined in a class by default operate on instances. However, one can define static or class methods by decorating such methods with the corresponding `@staticmethod` or `@classmethod` decorators.

Static Methods

Static methods are normal functions that exist in the namespace of a class. Referencing a static method from a class shows that, rather than an unbound method type, a function type is returned, as shown below:

```python
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @staticmethod
    def type():
        return "Current Account"
```

```
>>> Account.deposit
<unbound method Account.deposit>
>>> Account.type
<function type at 0x106893668>
```

To define a static method, the `@staticmethod` decorator is used, and such methods do not require the `self` argument. Static methods provide a mechanism for better organization: code related to a class is placed in that class and can be overridden in a subclass as needed.

Class Methods
Class methods, as the name implies, operate on classes themselves rather than on instances. Class methods are created using the `@classmethod` decorator, with the class rather than an instance passed as the first argument to the method.

```python
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_json(cls, params_json):
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

    @staticmethod
    def type():
        return "Current Account"
```

A motivating example of the use of class methods is as a factory for object creation. Imagine that data for the `Account` class comes in different formats, such as tuples, JSON strings etc. We cannot define multiple `__init__` methods, as a python class can have only one `__init__` method, so class methods come in handy for such situations. In the `Account` class defined above, for example, we want to initialize an account from a JSON string object, so we define a class factory method, `from_json`, that takes a JSON string object and handles the extraction of parameters and the creation of the account object using the extracted parameters. Another example of a class method in action is the `dict.fromkeys` method, which is used for creating `dict` objects from a sequence of supplied keys and a value.

Python Special Methods
Sometimes we may want to customize user-defined classes, either to change the way class objects are created and initialized, or to provide polymorphic behavior for certain operations. Polymorphic behavior enables user-defined classes to define their own implementations for certain python operations, such as the `+` operation. Python provides special methods that enable this. These methods are normally of the form `__*__`, where `*` refers to a method name. Examples of such methods are `__init__` and `__new__` for customizing object creation and initialization; `__getitem__`, `__get__`, `__add__` and `__sub__` for emulating built-in python types; and `__getattribute__`, `__getattr__` etc. for customizing attribute access. These are just a few of the special methods. We discuss a few important special methods below to provide an understanding, but the python documentation provides a comprehensive list of these methods.

Special methods for Object Creation
New class instances are created in a two-step process, using the `__new__` method to create a new instance and the `__init__` method to initialize the newly created object. Users are already familiar with defining the `__init__` method; the `__new__` method is rarely defined by the user, but it is possible to define it if one wants to customize the creation of class instances.
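As an illustrative sketch (our own example, not from the article), `__new__` can be overridden so that a class only ever creates a single instance, a simple singleton:

```python
class Config(object):
    _instance = None  # cached single instance

    def __new__(cls, *args, **kwargs):
        # __new__ creates (or, here, reuses) the instance;
        # __init__ then runs on whatever __new__ returns
        if cls._instance is None:
            cls._instance = super(Config, cls).__new__(cls)
        return cls._instance

a = Config()
b = Config()
print(a is b)  # both names refer to the same instance
```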
We can customize attribute access for class instances by implementing the methods listed below.

```python
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def __getattr__(self, name):
        return "Hey I dont see any attribute called {}".format(name)

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_dict(cls, params):
        params_dict = json.loads(params)
        return cls(params_dict.get("name"), params_dict.get("balance"))

    @staticmethod
    def type():
        return "Current Account"

x = Account('obi', 0)
```

`__getattr__(self, name)`: This method is called only when an attribute, `name`, is referenced that is neither an instance attribute nor found in the class tree for the object. This method should return some value for the attribute or raise an `AttributeError` exception. For example, if `x` is an instance of the `Account` class defined above, trying to access an attribute that does not exist results in a call to this method:

```
>>> acct = Account("obi", 10)
>>> acct.number
Hey I dont see any attribute called number
```

Note that if `__getattr__` code references instance attributes that do not exist, an infinite loop may occur, because the `__getattr__` method is then called successively without end.

`__setattr__(self, name, value)`: This method is called whenever an attribute assignment is attempted. `__setattr__` should insert the value being assigned into the dictionary of instance attributes rather than using `self.name = value`, which would result in a recursive call and hence an infinite loop.

`__delattr__(self, name)`: This is called whenever `del obj.name` is called.

`__getattribute__(self, name)`: This method is always called to implement attribute access for instances of the class.
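To sketch the `__setattr__` caveat above (our own example), the safe pattern writes into the instance dictionary directly instead of assigning through `self`:

```python
class Audited(object):
    """Counts every attribute assignment made on the instance."""

    def __setattr__(self, name, value):
        # writing into __dict__ avoids triggering __setattr__ again,
        # so there is no infinite recursion
        self.__dict__[name] = value
        if name != "changes":
            self.__dict__["changes"] = self.__dict__.get("changes", 0) + 1

a = Audited()
a.x = 1
a.x = 2
print(a.changes)  # two audited assignments were recorded
```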
Special methods for Type Emulation
Python defines special syntax for use with certain types; for example, the elements of lists and tuples can be accessed using the index notation `[]`, numeric values can be added with the `+` operator, and so on. We can create our own classes that make use of this special syntax by implementing certain special methods that the python interpreter calls whenever it encounters such syntax. We illustrate this with a very simple example below that emulates the basics of a python list.

```python
class CustomList(object):
    def __init__(self, container=None):
        # the class is just a wrapper around another list to
        # illustrate special methods
        if container is None:
            self.container = []
        else:
            self.container = container

    def __len__(self):
        # called when a user calls len(CustomList instance)
        return len(self.container)

    def __getitem__(self, index):
        # called when a user uses square brackets for indexing
        return self.container[index]

    def __setitem__(self, index, value):
        # called when a user performs an index assignment
        if index < len(self.container):
            self.container[index] = value
        else:
            raise IndexError()

    def __contains__(self, value):
        # called when the user uses the 'in' keyword
        return value in self.container

    def append(self, value):
        self.container.append(value)

    def __repr__(self):
        return str(self.container)

    def __add__(self, otherList):
        # provides support for the use of the + operator
        return CustomList(self.container + otherList.container)
```

In the above, `CustomList` is a thin wrapper around an actual list. We have implemented some special methods for illustration purposes:

`__len__(self)`: called when the `len()` function is invoked on an instance of `CustomList`:

```
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> len(myList)
4
```

`__getitem__(self, index)`: provides support for square-bracket indexing on an instance of `CustomList`:

```
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3]
4
```

`__setitem__(self, index, value)`: called to implement the assignment of `value` to `self[index]`:

```
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3] = 100
>>> myList[3]
100
```

`__contains__(self, value)`: called to implement the membership test operators; should return true if the item is in `self`, false otherwise:

```
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> 4 in myList
True
```

`__repr__(self)`: called to compute the representation of `self`, for example when `print` is called with `self` as argument:

```
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> print myList
[1, 2, 3, 4]
```

`__add__(self, otherList)`: called to compute the addition of two instances of `CustomList` when the `+` operator is used:

```
>>> myList = CustomList()
>>> otherList = CustomList()
>>> otherList.append(100)
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList + otherList + otherList
[1, 2, 3, 4, 100, 100]
```
The above provides an example of how we can customize class behavior by defining certain special methods. For a comprehensive listing of all such methods, see the python documentation.

In a follow-up tutorial, we put all we have discussed about special methods together and explain descriptors, a very important piece of functionality that sees widespread use in python object oriented programming.

Further Reading
- Python Essential Reference
- Python Data Model
-
Introduction to Python Generators
Generators are a very fascinating concept in python; they have a wide range of applications, from simple lazy evaluation to mind-blowing advanced concurrent execution of tasks (see David Beazley). Before we dive into the fascinating world of python generators, we take a little detour to explain python iterators, a concept that I feel is integral to grasping generators.
Python Iterators
Simply put, an iterator in python is any python type that can be used with a `for` loop. Python lists, tuples, dicts and sets are all examples of built-in iterators. One may ask: what is it about these types that makes them iterators, and is this a property of built-in types only?

These types are iterators because they implement the iterator protocol. Then again, what is the iterator protocol?

To answer this question requires another little detour. In python, there are some special object methods commonly referred to as magic methods. Just stay with me on this and take what I say on faith, at least till we get to object orientation in python. These methods are not normally called explicitly in code but are called implicitly by the python interpreter during code execution. A very familiar example of these magic methods is the `__init__` method, which is roughly analogous to a constructor that is called during the initialization of a python object. Similar to the way the `__init__` magic method has to be implemented for custom object initialization, the iterator protocol consists of a number of magic methods that need to be implemented by any object that wants to be used as an iterator:

- `__iter__`: called on the initialization of an iterator. This should return an object that has a `next` method (in python 3, this is changed to `__next__`).
- `next`: called whenever the `next()` global function is invoked with the iterator as argument. The iterator's `next` method should return the next value of the iterable. When an iterator is used with a `for` loop, the loop implicitly calls `next()` on the iterator object. This method should raise a `StopIteration` exception when there is no longer any new value to return, to signal the end of the iteration.
Any python class can be defined to act as an iterator so long as the iterator protocol is implemented. This is illustrated by implementing a simple iterator that returns Fibonacci numbers up to a given maximum value.
```python
class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def next(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        return fib
```

```
>>> for i in Fib(10): print i
0
1
1
2
3
5
8
```

We also go ahead and implement our own custom `range` function for looping through numbers. This simple implementation only loops from 0 upwards.

```python
class CustomRange:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.curr = 0
        return self

    def next(self):
        numb = self.curr
        if self.curr >= self.max:
            raise StopIteration
        self.curr += 1
        return numb
```

```
>>> for i in CustomRange(10): print i
0
1
2
3
4
5
6
7
8
9
```

Back To Generators
Now we have a basic understanding of iterators, but how do they relate to generators? In short, python generators are iterators. PEP 255, which describes simple generators, refers to generators by their full name: generator-iterators. Generators are used either by calling the `next` method on the generator object or by using the generator object in a `for` loop.

In python, generator functions, or just generators, return generator objects. These generators are functions that contain the `yield` keyword. Rather than having to write every generator with `__iter__` and `next` methods, which is pretty cumbersome, python provides the `yield` keyword, which gives an easy way of defining generators. For example, the Fibonacci iterator can be recast as a generator using the `yield` keyword as shown below:

```python
def fib(max):
    a, b = 0, 1
    while a < max:
        yield a
        a, b = b, a + b
```

The use of the `yield` keyword greatly simplifies the creation of the generator.

The yield keyword

The `yield` keyword is used in the following way:

```
yield expression_list
```

The `yield` keyword is central to python generator functions, but what does this `yield` keyword do? To understand the `yield` keyword, we contrast it with the `return` keyword, the other keyword that gives control back to the caller of a function. When an executing function encounters the `yield` keyword, it suspends execution at that point, saves its context and returns to the caller along with any value in the `expression_list`; when the caller invokes `next` on the generator object, execution of the function continues till another `yield` or `return` is encountered or the end of the function is reached. To quote PEP 255:

If a yield statement is encountered, the state of the function is
frozen, and the value of expression_list is returned to .next()‘s
caller. By "frozen" we mean that all local state is retained,
including the current bindings of local variables, the instruction
pointer, and the internal evaluation stack: enough information is
saved so that the next time .next() is invoked, the function can
proceed exactly as if the yield statement were just another external
call.
On the other hand, when a function encounters a `return` statement, it returns to the caller along with any value following the `return` statement, and the execution of the function is complete for all intents and purposes. One can think of `yield` as causing only a temporary interruption in the execution of a function.

Python generators in action
Returning to the Fibonacci number function, if we wanted to generate all Fibonacci numbers up to a maximum value, the following non-generator snippet can be used to create the sequence
```python
def fib(max):
    numbers = []
    a, b = 0, 1
    while a < max:
        numbers.append(a)
        a, b = b, a + b
    return numbers
```

The snippet above eagerly calculates all the numbers below `max` and returns the collection of numbers in a single function call. Using the Fibonacci generator to solve the same problem is a different ball game. We can either use it in a `for` loop, allowing the `for` construct to implicitly initialize the generator and call `next` on the generator object, or explicitly initialize the generator and call `next` on it ourselves; the values are returned one after the other by calling `next` on the generator.

The Fibonacci number generator is implemented using the `yield` keyword as shown below:

```python
def fib(max):
    a, b = 0, 1
    while a < max:
        yield a
        a, b = b, a + b
```

In the following sections, we explicitly initialize the generator and make use of the `next` function to get values from it. First, we initialize the generator object as shown below:

```
>>> gen = fib(10)
>>> gen
<generator object fib at 0x1069a6d20>
```

What has happened above is that when the generator is called, the arguments (`max` has a value of 10) are bound to names, but the body of the function is not executed. Rather, a generator-iterator object is returned, as shown by the value of `gen`. This object can then be used as an iterator; note that it is the presence of the `yield` keyword that is responsible for this.

```
>>> next(gen)
0
>>> next(gen)
1
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
3
>>> next(gen)
5
>>> next(gen)
8
>>> next(gen)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```

Now, when the `next` function is called with the generator object as argument, the generator function body executes till it encounters a `yield` or `return` statement or the end of the function body is reached. In the case of a `yield` statement, the expression following the `yield` is returned to the caller and the state of the function is saved. When `next` is called on the Fibonacci generator object, `a` is bound to 0 and `b` is bound to 1. The `while` condition is true, so the first statement of the loop is executed, which happens to be a `yield` expression. This expression returns the value of `a`, which happens to be 0, to the caller, and the function suspends at that point with all local context saved.
Think of this as eating your lunch partway and then storing it so as to continue eating later; you can keep eating till the lunch is exhausted, which in the case of a generator means the function reaching a `return` statement or the end of the function body. When `next` is called on the Fibonacci generator object again, execution resumes at the `a, b = b, a + b` line and continues executing as normal till `yield` is encountered again. This continues till the loop condition is false, at which point a `StopIteration` exception is raised to signal that there is no more data to generate.

Generator Expressions
In python comprehensions, we discussed list comprehensions and how they are formed. One drawback of list comprehensions is that all the values are calculated at once, regardless of whether they are needed at the time or not, and this may sometimes consume an inordinate amount of memory. PEP 289 proposed generator expressions to resolve this, and the proposal was accepted and added to the language. Generator expressions are just like list comprehensions; the only difference is that the square brackets of a list comprehension are replaced with parentheses, and a generator object is returned rather than a list. We contrast list comprehensions and generator expressions below.
To generate a list of the square of number from 0 to 10 using list comprehensions the following is done:
```
>>> squares = [i**2 for i in range(10)]
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

We could use a generator expression such as the one below in place of the list comprehension:

```
>>> squares = (i**2 for i in range(10))
>>> squares
<generator object <genexpr> at 0x1069a6d70>
```

We can then access the values of the generator using `for` loops or the `next` method as shown below; each value is computed only on demand.

```
>>> squares = (i**2 for i in range(10))
>>> for square in squares: print square
0
1
4
9
16
25
36
49
64
81
```

Of what use are these generators?
Python generators provide the basis for lazy evaluation, or calculation on demand, in python. Lazy evaluation is an integral part of stream processing (the processing of huge amounts of data). For example, imagine we wanted to create an indeterminate number of Fibonacci numbers; this would not be possible with a non-generator approach, because we would have to define in advance the amount of numbers we need or go into an infinite loop. With the generator approach, doing so is trivial: we just call `next` to get the next Fibonacci number without bothering about where or when the stream of numbers ends. A more practical type of stream processing is handling large data files such as log files. Generators provide a space-efficient method for such data processing, as only parts of the file are handled at any given point in time (David Beazley).

Generators can also be used to replace callbacks. Rather than passing a callback to a function, the function can yield control to its caller whenever it needs to report to the caller. The caller can then invoke the code that would otherwise have been used as a callback. This frees the main function from having to know about the callback.
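A minimal sketch of the callback-replacement idea (our own example): a grep-like function yields each match back to the caller instead of invoking a caller-supplied callback for every match.

```python
def grep(pattern, lines):
    # instead of calling a callback for each match,
    # yield the match back to the caller
    for line in lines:
        if pattern in line:
            yield line

log = [
    "INFO started",
    "ERROR disk full",
    "INFO checkpoint",
    "ERROR timeout",
]

# the caller, not grep, decides what to do with each reported match
for hit in grep("ERROR", log):
    print(hit)
```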
At a more advanced level, generators can also be used to implement concurrency (David Beazley). When a generator yields control to the caller, the caller can go ahead and call another generator, simulating concurrent execution.

The applications listed above are just a few of the possible uses of python generators. In a follow-up post, we will discuss new additions to python generators that enable a caller to send values to a generator, as well as some advanced uses of generators.
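A minimal sketch of this idea (our own illustration, loosely in the spirit of Beazley's talks): a caller that round-robins between generator "tasks" gets interleaved, concurrency-like execution.

```python
def counter(label, n):
    for i in range(n):
        yield "%s-%d" % (label, i)  # yield suspends this "task"

def round_robin(tasks):
    # naive scheduler: keep cycling through tasks, dropping finished ones
    results = []
    while tasks:
        task = tasks.pop(0)
        try:
            results.append(next(task))
            tasks.append(task)  # reschedule at the back of the queue
        except StopIteration:
            pass  # task finished; do not reschedule
    return results

# the two counters run interleaved, not one after the other
print(round_robin([counter("a", 2), counter("b", 3)]))
```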
Further Reading
- PEP 289 – Generator Expressions
- Generators: The Final Frontier by David Beazley
-
The Function
Python functions are named or anonymous sets of statements or expressions. In python, functions are first class objects. This means that there is no restriction on the usage of functions; python functions can be used just like any other python value, such as strings and numbers. Python functions have attributes that can be introspected using the built-in python `dir` function, as shown below:

```python
def square(x):
    return x**2
```

```
>>> square
<function square at 0x031AA230>
>>> dir(square)
['__call__', '__class__', '__closure__', '__code__', '__defaults__',
 '__delattr__', '__dict__', '__doc__', '__format__', '__get__',
 '__getattribute__', '__globals__', '__hash__', '__init__', '__module__',
 '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__',
 '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'func_closure',
 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals',
 'func_name']
```

Some important function attributes include:
- `__doc__` returns the documentation string for the given function.

```python
>>> def square(x):
...     """return square of given number"""
...     return x**2
...
>>> square.__doc__
'return square of given number'
```

- `__name__` returns the function name.

```python
>>> square.__name__
'square'
```

- `__module__` returns the name of the module the function is defined in.

```python
>>> square.__module__
'__main__'
```

- `func_defaults` returns a tuple of the default argument values. Default arguments are discussed later on.
- `func_globals` returns a reference to the dictionary that holds the function's global variables.

```python
>>> square.func_globals
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__',
 'square': <function square at 0x10f099c08>, '__doc__': None,
 '__package__': None}
```

- `func_dict` returns the namespace supporting arbitrary function attributes.

```python
>>> square.func_dict
{}
```

- `func_closure` returns a tuple of cells that contain bindings for the function's free variables. Closures are discussed later on.
Functions can be passed as arguments to other functions. Functions that take other functions as arguments are commonly referred to as higher-order functions, and they form a very important part of functional programming. A very good example of a higher-order function is `map`, which takes a function and an iterable and applies the function to each item in the iterable, returning a new list. In the example below, we illustrate this by passing the previously defined `square` function and an iterable of numbers to `map`:

```python
>>> map(square, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Functions can also be defined inside other function bodies and can be returned from other function calls.
```python
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner
```

In the above example, we define a function, `inner`, within another function, `outer`, and return the `inner` function when the `outer` function is executed. Functions can also be assigned to variables just like any other Python object, as shown below:

```python
>>> func = outer()
>>> func
<function inner at 0x031AA270>
>>>
```

In the above example, the `outer` function returns a function when called, and this is assigned to the variable `func`. This variable can be called just like the returned function:

```python
>>> func()
'outer variable'
```

Function Definitions
The `def` keyword is used to create user-defined functions. Function definitions are executable statements.

```python
def square(x):
    return x**2
```

When the module containing the `square` function above is loaded into the Python interpreter, or when the function is defined within the Python REPL, the function definition statement, `def square(x)`, is executed. This has implications for default arguments with mutable values, which are covered later in this tutorial. The execution of a function definition binds the function name in the current local namespace (think of namespaces as name-to-value mappings that can also be nested; namespaces and scopes are covered in more detail in another tutorial) to a function object, which is a wrapper around the executable code for the function. The function object carries a reference to the current global namespace, which is the namespace used when the function is called. The function definition does not execute the function body; the body is executed only when the function is called.

Function Call Arguments
In addition to normal arguments, Python functions support a variable number of arguments. These come in three flavors, described below:
- Default Argument Values: This allows a user to define default values for some function arguments. Such a function can then be called with fewer arguments; Python uses the supplied default values for arguments that are not provided in the function call. The example below is illustrative:

```python
def show_args(arg, def_arg=1, def_arg2=2):
    return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)
```

  The above function has been defined with a single normal positional argument, `arg`, and two default arguments, `def_arg` and `def_arg2`. It can be called in any of the following ways:

  - Supplying non-default positional argument values only; in this case the other arguments take on the supplied default values:

```python
>>> show_args("tranquility")
'arg=tranquility, def_arg=1, def_arg2=2'
```

  - Supplying values to override some default arguments in addition to the non-default positional arguments:

```python
>>> show_args("tranquility", "to Houston")
'arg=tranquility, def_arg=to Houston, def_arg2=2'
```

  - Supplying values for all arguments, overriding even the arguments with defaults:

```python
>>> show_args("tranquility", "to Houston", "the eagle has landed")
'arg=tranquility, def_arg=to Houston, def_arg2=the eagle has landed'
```

  It is also very important to be careful when using mutable data structures as default arguments. Function definitions are executed only once, so these mutable data structures, which are reference values, are created once at definition time. This means that the same mutable data structure is used across all function calls, as shown below:

```python
def show_args_using_mutable_defaults(arg, def_arg=[]):
    def_arg.append("Hello World")
    return "arg={}, def_arg={}".format(arg, def_arg)

>>> show_args_using_mutable_defaults("test")
"arg=test, def_arg=['Hello World']"
>>> show_args_using_mutable_defaults("test 2")
"arg=test 2, def_arg=['Hello World', 'Hello World']"
```

  On every function call, `Hello World` is added to the `def_arg` list, and after two calls the default argument holds two "Hello World" strings. It is important to be aware of this when using mutable values as default arguments. The reason for this behavior will become clearer when we discuss the Python data model.
- Keyword Arguments: Functions can be called using keyword arguments of the form `kwarg=value`. A `kwarg` refers to the name of an argument used in the function definition. Take the function defined below with positional non-default and default arguments:

```python
def show_args(arg, def_arg=1):
    return "arg={}, def_arg={}".format(arg, def_arg)
```

  To illustrate function calls with keyword arguments, this function can be called in any of the following ways:

```python
show_args(arg="test", def_arg=3)
show_args("test")
show_args(arg="test")
show_args("test", 3)
```

  In a function call, keyword arguments must not come before non-keyword arguments, so `show_args(def_arg=4, "test")` is a syntax error. A call must also supply all required arguments, so the following fails because no value is given for `arg`:

```python
show_args(def_arg=4)
```

  A function call cannot supply duplicate values for an argument, so the following is illegal:

```python
show_args("test", arg="testing")
```

  In the above, the argument `arg` is a positional argument, so the value `test` is assigned to it. Trying to assign to the keyword `arg` again is an attempt at multiple assignment, and this is illegal. All keyword arguments passed must match one of the arguments accepted by the function, but the order of keyword arguments, including those for non-optional arguments, is not important. The following call, in which the argument order is switched, is therefore legal:

```python
show_args(def_arg="testing", arg="test")
```
show_args(def_arg="testing", arg="test") - Arbitrary Argument List: Python also supports defining functions that take arbitrary number of arguments that are passed to the function in a
tuple. An example of this from the python tutorial is given below:def write_multiple_items(file, separator, *args): file.write(separator.join(args))The arbitrary number of arguments must come after normal arguments; in this case, after the
fileandseparatorarguments. The following is an example of function calls to the above defined function:f = open("test.txt", "wb") write_multiple_items(f, " ", "one", "two", "three", "four", "five")The arguments
one two three four fiveare all bunched together into a tuple that can be accessed via theargsargument.
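A common idiom for avoiding the mutable-default pitfall shown earlier is to use `None` as a sentinel and create the mutable value inside the function body. The sketch below is illustrative; `show_args_safe` is a hypothetical name, not one of the original examples.

```python
def show_args_safe(arg, def_arg=None):
    # A fresh list is created on every call, so calls do not share state
    if def_arg is None:
        def_arg = []
    def_arg.append("Hello World")
    return "arg={}, def_arg={}".format(arg, def_arg)

print(show_args_safe("test"))    # arg=test, def_arg=['Hello World']
print(show_args_safe("test 2"))  # arg=test 2, def_arg=['Hello World']
```

Unlike the `def_arg=[]` version, each call starts from an empty list because the list is created at call time, not at definition time.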
Unpacking Function Argument
Sometimes we may have the arguments for a function call in a tuple, a list, or a dict. These arguments can be unpacked into a function call using the `*` or `**` operators. Consider the following function that takes two positional arguments and prints out the values:

```python
def print_args(a, b):
    print a
    print b
```

If we had the values we wanted to supply to the function in a list, we could unpack them directly into the function call as shown below:

```python
>>> args = [1, 2]
>>> print_args(*args)
1
2
```

Similarly, when we have keyword arguments, we can use a `dict` to store the keyword-to-value mapping and the `**` operator to unpack the keyword arguments into the function call, as shown below:

```python
>>> def parrot(voltage, state='a stiff', action='voom'):
...     print "-- This parrot wouldn't", action,
...     print "if you put", voltage, "volts through it.",
...     print "E's", state, "!"
...
>>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
>>> parrot(**d)
-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !
```

Defining Functions with `*` and `**`

Sometimes, when defining a function, we may not know beforehand the number of arguments to expect. This leads to function definitions of the following signature:
```python
show_args(arg, *args, **kwargs)
```

The `*args` argument represents an unknown-length sequence of positional arguments, while `**kwargs` represents a dict of keyword-to-value mappings, which may contain any number of entries. The `*args` must come before `**kwargs` in the function definition. The following illustrates this:

```python
def show_args(arg, *args, **kwargs):
    print arg
    for item in args:
        print item
    for key, value in kwargs.items():
        print key, value

>>> args = [1, 2, 3, 4]
>>> kwargs = dict(name='testing', age=24, year=2014)
>>> show_args("hey", *args, **kwargs)
hey
1
2
3
4
age 24
name testing
year 2014
```

The normal argument must be supplied to the function, but the `*args` and `**kwargs` arguments are optional, as shown below:

```python
>>> show_args("hey")
hey
```

At the function call, the normal argument is supplied normally, while the optional arguments are unpacked into the call.
Anonymous Functions
Python also has support for anonymous functions. These functions are created using the `lambda` keyword. Lambda expressions in Python are of the form:

```
lambda_expr ::= "lambda" [parameter_list]: expression
```

Lambda expressions return function objects after evaluation and have the same attributes as named functions. Lambda expressions are normally used only for very simple functions in Python, as shown below:

```python
>>> square = lambda x: x**2
>>> for i in range(10):
...     square(i)
...
0
1
4
9
16
25
36
49
64
81
>>>
```

The above lambda expression is equivalent to the following named function:

```python
def square(x):
    return x**2
```

Nested functions and Closures
Function definitions within a function create nested functions, as shown below:

```python
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner
```

In this type of definition, the function `inner` is only in scope inside the function `outer`, so it is most often useful when the inner function is returned (moving it to the outer scope) or passed into another function. In nested functions such as the above, a new instance of the nested function is created on each call to the outer function. This is because, during each execution of the outer function, the definition of the inner function is executed, but its body is not.

A nested function has access to the environment in which it was created. This is a direct result of the semantics of Python function definitions. One consequence is that a variable defined in the outer function can be referenced in the inner function even after the outer function has finished execution:
```python
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner

>>> x = outer()
>>> x
<function inner at 0x0273BCF0>
>>> x()
'outer variable'
```

When a nested function references variables from an outer function, we say the nested function is closed over the referenced variable. We can use one of the special attributes of function objects, `__closure__`, to access the closed-over variables, as shown below:

```python
>>> cl = x.__closure__
>>> cl
(<cell at 0x029E4470: str object at 0x02A0FD90>,)
>>> cl[0].cell_contents
'outer variable'
```

Closures in Python have a quirky behavior. In Python 2.x, variables that point to immutable types such as strings and numbers cannot be rebound within a closure. The example below illustrates this:
```python
def counter():
    count = 0
    def c():
        count += 1
        return count
    return c

>>> c = counter()
>>> c()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in c
UnboundLocalError: local variable 'count' referenced before assignment
```

A rather wonky workaround is to use a mutable type to capture the closure, as shown below:

```python
def counter():
    count = [0]
    def c():
        count[0] += 1
        return count[0]
    return c

>>> c = counter()
>>> c()
1
>>> c()
2
>>> c()
3
```
nonlocalkey word that can be used to fix this closure scoping issue as shown below. In the tutorial on namespaces, we describe these quirks in more details.def counter(): count = 0 def c(): nonlocal count count += 1 return count return cClosures can be used for maintaining states (isn’t that what classes are for) and for some simple cases provide a more succinct and readable solution than classes. We use a logging example curled from tech_pro to illustrate this. Imagine an extremely trivial logging API using class-based object orientation that can log at different levels:
class Log: def __init__(self, level): self._level = level def __call__(self, message): print("{}: {}".format(self._level, message)) log_info = Log("info") log_warning = Log("warning") log_error = Log("error")This same functionality can be implemented with closures as shown below:
```python
def make_log(level):
    def _(message):
        print("{}: {}".format(level, message))
    return _

log_info = make_log("info")
log_warning = make_log("warning")
log_error = make_log("error")
```
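To see that each closure remembers its own `level` independently, here is a small variation of the same idea that returns the formatted string instead of printing it, so the result is easy to inspect. This is an illustrative sketch, not part of the original API.

```python
def make_log(level):
    # Each call to make_log creates a new closure over its own "level"
    def _(message):
        return "{}: {}".format(level, message)
    return _

log_info = make_log("info")
log_error = make_log("error")

print(log_info("system started"))  # info: system started
print(log_error("disk failure"))   # error: disk failure
```

The two callables share no state: each carries its own cell binding for `level`, exactly as shown earlier with `__closure__`.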