I can't find a definitive answer for this. As far as I know, you can't have multiple __init__
functions in a Python class. So how do I solve this problem?
Suppose I have a class called Cheese
with the number_of_holes
property. How can I have two ways of creating cheese objects...
One that takes a number of holes like this: parmesan = Cheese(num_holes = 15). And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese().
I can think of only one way to do this, but this seems clunky:
class Cheese():
def __init__(self, num_holes = 0):
if (num_holes == 0):
# Randomize number_of_holes
else:
            self.number_of_holes = num_holes
What do you say? Is there another way?
Actually None
is much better for "magic" values:
class Cheese():
def __init__(self, num_holes = None):
if num_holes is None:
...
Now if you want complete freedom of adding more parameters:
class Cheese():
def __init__(self, *args, **kwargs):
#args -- tuple of anonymous arguments
#kwargs -- dictionary of named arguments
self.num_holes = kwargs.get('num_holes',random_holes())
To better explain the concept of *args
and **kwargs
(you can actually change these names):
def f(*args, **kwargs):
print 'args: ', args, ' kwargs: ', kwargs
>>> f('a')
args: ('a',) kwargs: {}
>>> f(ar='a')
args: () kwargs: {'ar': 'a'}
>>> f(1,2,param=3)
args: (1, 2) kwargs: {'param': 3}
http://docs.python.org/reference/expressions.html#calls
Using num_holes=None
as the default is fine if you are going to have just __init__
.
If you want multiple, independent "constructors", you can provide these as class methods. These are usually called factory methods. In this case you could have the default for num_holes
be 0
.
from random import randint

class Cheese(object):
def __init__(self, num_holes=0):
"defaults to a solid cheese"
self.number_of_holes = num_holes
@classmethod
def random(cls):
return cls(randint(0, 100))
@classmethod
def slightly_holey(cls):
return cls(randint(0, 33))
@classmethod
def very_holey(cls):
return cls(randint(66, 100))
Now create objects like this:
gouda = Cheese()
emmentaler = Cheese.random()
leerdammer = Cheese.slightly_holey()
@classmethods are the Pythonic way of implementing multiple constructors.
Regular (instance) methods have a reference to the instance as self. Then there are class methods (using @classmethod) which have a reference to the class object as cls. And finally there are static methods (declared with @staticmethod) which have neither of those references. Static methods are just like functions at module level, except they live in the class's namespace.
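A tiny sketch of the three kinds of methods (the class and names are purely illustrative):

class Demo:
    def instance_method(self):      # gets the instance as `self`
        return self

    @classmethod
    def class_method(cls):          # gets the class object as `cls`
        return cls

    @staticmethod
    def static_method():            # gets neither; behaves like a module-level function
        return "no implicit first argument"

d = Demo()
assert d.instance_method() is d
assert Demo.class_method() is Demo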
Using @abstractmethod and @classmethod on the same factory method is possible and is built into the language. I would also argue that this approach is more explicit, which goes with The Zen of Python.
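For instance, a sketch of an abstract factory classmethod (the class names here are illustrative, not from the question):

from abc import ABC, abstractmethod
from random import randint

class Cheese(ABC):
    def __init__(self, num_holes):
        self.number_of_holes = num_holes

    @classmethod
    @abstractmethod
    def random(cls):
        """Every concrete cheese must supply its own random factory."""

class Gouda(Cheese):
    @classmethod
    def random(cls):
        return cls(randint(0, 100))

gouda = Gouda.random()   # works; Cheese.random() alone cannot be instantiated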
To avoid a pile of ifs in __init__(), the trick is to have each of the unique factory methods handle their own unique aspects of initialization, and have __init__() accept only the fundamental pieces of data that define an instance. For example, Cheese might have attributes volume and average_hole_radius in addition to number_of_holes. __init__() would accept these three values. Then you could have a class method with_density() that randomly chooses the fundamental attributes to match a given density, subsequently passing them on to __init__().
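A sketch of that idea (the density formula, value ranges, and solid_density default are made-up assumptions, purely for illustration):

from math import pi
from random import uniform

class Cheese:
    def __init__(self, volume, average_hole_radius, number_of_holes):
        # __init__ accepts only the fundamental data that defines an instance
        self.volume = volume
        self.average_hole_radius = average_hole_radius
        self.number_of_holes = number_of_holes

    @classmethod
    def with_density(cls, density, solid_density=1.1):
        # hypothetical: pick random fundamentals, then choose a hole count so
        # the overall density roughly matches the requested one
        volume = uniform(1.0, 10.0)
        hole_radius = uniform(0.05, 0.3)
        hole_volume = 4.0 / 3.0 * pi * hole_radius ** 3
        num_holes = max(0, int(volume * (1 - density / solid_density) / hole_volume))
        return cls(volume, hole_radius, num_holes)

airy = Cheese.with_density(0.6)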
One should definitely prefer the solutions already posted, but since no one mentioned this solution yet, I think it is worth mentioning for completeness.
The @classmethod
approach can be modified to provide an alternative constructor which does not invoke the default constructor (__init__
). Instead, an instance is created using __new__
.
This could be used if the type of initialization cannot be selected based on the type of the constructor argument, and the constructors do not share code.
Example:
class MyClass(set):
def __init__(self, filename):
self._value = load_from_file(filename)
@classmethod
def from_somewhere(cls, somename):
obj = cls.__new__(cls) # Does not call __init__
super(MyClass, obj).__init__() # Don't forget to call any polymorphic base class initializers
obj._value = load_from_somewhere(somename)
return obj
This is handy when the alternative constructor cannot reuse __init__'s arguments. However, could you provide some references, please, showing that this method is officially approved or supported? How safe and reliable is it to call the __new__ method directly?
You need to call super, otherwise this won't work in cooperative multiple inheritance, so I added that line to your answer.
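To illustrate why, here is a small sketch with a hypothetical cooperative mixin; if from_somewhere skipped the super().__init__() call, the mixin's initialiser would never run:

class TrackingMixin:
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.history = []          # only exists if this __init__ actually runs

class TrackedData(TrackingMixin, set):
    @classmethod
    def from_somewhere(cls, values):
        obj = cls.__new__(cls)
        super(TrackedData, obj).__init__()   # runs TrackingMixin.__init__, then set.__init__
        obj.update(values)
        return obj

t = TrackedData.from_somewhere([1, 2, 3])
print(sorted(t), t.history)   # [1, 2, 3] []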
All of these answers are excellent if you want to use optional parameters, but another Pythonic possibility is to use a classmethod to generate a factory-style pseudo-constructor:
from random import randint

class Cheese:
    def __init__(self, num_holes):
        # do stuff with the number
        self.number_of_holes = num_holes

    @classmethod
    def fromRandom(cls):
        return cls(randint(0, 100))  # some random number
Why do you think your solution is "clunky"? Personally I would prefer one constructor with default values over multiple overloaded constructors in situations like yours (Python does not support method overloading anyway):
def __init__(self, num_holes=None):
if num_holes is None:
# Construct a gouda
else:
# custom cheese
# common initialization
For really complex cases with lots of different constructors, it might be cleaner to use different factory functions instead:
@classmethod
def create_gouda(cls):
c = Cheese()
# ...
return c
@classmethod
def create_cheddar(cls):
# ...
In your cheese example you might want to use a Gouda subclass of Cheese though...
Those are good ideas for your implementation, but consider the case where you are presenting a cheese-making interface to a user. They don't care how many holes the cheese has or what internals go into making cheese. The user of your code just wants "gouda" or "parmesan", right?
So why not do this:
# cheese_user.py
from cheeses import make_gouda, make_parmesan
gouda = make_gouda()
parmesan = make_parmesan()
And then you can use any of the methods above to actually implement the functions:
# cheeses.py
class Cheese(object):
def __init__(self, *args, **kwargs):
#args -- tuple of anonymous arguments
#kwargs -- dictionary of named arguments
self.num_holes = kwargs.get('num_holes',random_holes())
def make_gouda():
return Cheese()
def make_parmesan():
return Cheese(num_holes=15)
This is a good encapsulation technique, and I think it is more Pythonic. To me this way of doing things fits more in line with duck typing. You are simply asking for a gouda object and you don't really care what class it is.
make_gouda, make_parmesan
should be classmethods of class Cheese
Overview
For the specific cheese example, I agree with many of the other answers about using default values to signal random initialization or to use a static factory method. However, there may also be related scenarios that you had in mind where there is value in having alternative, concise ways of calling the constructor without hurting the quality of parameter names or type information.
Since Python 3.8, functools.singledispatchmethod can help accomplish this in many cases (and the more flexible multimethod library can apply in even more scenarios). (This related post describes how one could accomplish the same in Python 3.4 without a library.) I haven't seen examples in the documentation for either of these that specifically show overloading __init__ as you ask about, but it appears that the same principles for overloading any member method apply (as shown below).
"Single dispatch" (available in the standard library) requires that there be at least one positional parameter and that the type of the first argument be sufficient to distinguish among the possible overloaded options. For the specific Cheese example, this doesn't hold since you wanted random holes when no parameters were given, but multidispatch does support the very same syntax and can be used as long as each method version can be distinguished based on the number and type of all arguments together.
Example
Here is an example of how to use either method (some of the details are there to please mypy, which was my goal when I first put this together):
from functools import singledispatchmethod as overload
# or the following more flexible method after `pip install multimethod`
# from multimethod import multidispatch as overload
class MyClass:
@overload # type: ignore[misc]
def __init__(self, a: int = 0, b: str = 'default'):
self.a = a
self.b = b
@__init__.register
def _from_str(self, b: str, a: int = 0):
self.__init__(a, b) # type: ignore[misc]
def __repr__(self) -> str:
return f"({self.a}, {self.b})"
print([
MyClass(1, "test"),
MyClass("test", 1),
MyClass("test"),
MyClass(1, b="test"),
MyClass("test", a=1),
MyClass("test"),
MyClass(1),
# MyClass(), # `multidispatch` version handles these 3, too.
# MyClass(a=1, b="test"),
# MyClass(b="test", a=1),
])
Output:
[(1, test), (1, test), (0, test), (1, test), (1, test), (0, test), (1, default)]
Notes:
I wouldn't usually name the alias overload, but it helped make the diff between using the two methods just a matter of which import you use.
The # type: ignore[misc] comments are not necessary to run, but I put them in there to please mypy which doesn't like decorating __init__ nor calling __init__ directly.
If you are new to the decorator syntax, realize that putting @overload before the definition of __init__ is just sugar for __init__ = overload(the original definition of __init__). In this case, overload is a class, so the resulting __init__ is an object that has a __call__ method so it looks like a function, but that also has a .register method which is called later to add another overloaded version of __init__. This is a bit messy, but it pleases mypy because there are no method names being defined twice. If you don't care about mypy and are planning to use the external library anyway, multimethod also has simpler alternative ways of specifying overloaded versions.
Defining __repr__ is simply there to make the printed output meaningful (you don't need it in general).
Notice that multidispatch is able to handle three additional input combinations that don't have any positional parameters.
Use num_holes=None
as a default, instead. Then check for whether num_holes is None
, and if so, randomize. That's what I generally see, anyway.
More radically different construction methods may warrant a classmethod that returns an instance of cls
.
The best answer is the one above about default arguments, but I had fun writing this, and it certainly does fit the bill for "multiple constructors". Use at your own risk.
What about the __new__ method?
"Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it."
So you can have the __new__ method modify your class definition by attaching the appropriate constructor method.
class Cheese(object):
def __new__(cls, *args, **kwargs):
obj = super(Cheese, cls).__new__(cls)
num_holes = kwargs.get('num_holes', random_holes())
if num_holes == 0:
cls.__init__ = cls.foomethod
else:
cls.__init__ = cls.barmethod
return obj
def foomethod(self, *args, **kwargs):
print "foomethod called as __init__ for Cheese"
def barmethod(self, *args, **kwargs):
print "barmethod called as __init__ for Cheese"
if __name__ == "__main__":
parm = Cheese(num_holes=5)
This approach is not thread-safe. Suppose two threads create cheeses at the same time, one with Cheese(0) and one with Cheese(1). It's possible that thread 1 might run cls.__init__ = cls.foomethod, but then thread 2 might run cls.__init__ = cls.barmethod before thread 1 gets any further. Both threads will then end up calling barmethod, which isn't what you want.
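One way to keep the same idea but avoid the race, as a sketch, is to pick the initialiser per instance inside __init__ rather than rebinding cls.__init__ (random_holes is assumed here, as in the answer above):

from random import randint

def random_holes():
    return randint(0, 100)

class Cheese(object):
    def __init__(self, *args, **kwargs):
        num_holes = kwargs.get('num_holes', random_holes())
        # the choice is made per instance, so concurrent constructions
        # cannot overwrite each other's initialiser
        if num_holes == 0:
            self._init_solid()
        else:
            self._init_holey(num_holes)

    def _init_solid(self):
        self.number_of_holes = 0

    def _init_holey(self, num_holes):
        self.number_of_holes = num_holes

parm = Cheese(num_holes=5)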
I'd use inheritance, especially if there are going to be more differences than the number of holes, and especially if Gouda needs to have a different set of members than Parmesan.
class Gouda(Cheese):
def __init__(self):
        super(Gouda, self).__init__(num_holes=10)
class Parmesan(Cheese):
def __init__(self):
        super(Parmesan, self).__init__(num_holes=15)
Since my initial answer was criticised on the basis that my special-purpose constructors did not call the (unique) default constructor, I post here a modified version that honours the wishes that all constructors shall call the default one:
class Cheese:
def __init__(self, *args, _initialiser="_default_init", **kwargs):
"""A multi-initialiser.
"""
getattr(self, _initialiser)(*args, **kwargs)
def _default_init(self, ...):
"""A user-friendly smart or general-purpose initialiser.
"""
...
def _init_parmesan(self, ...):
"""A special initialiser for Parmesan cheese.
"""
...
def _init_gouda(self, ...):
"""A special initialiser for Gouda cheese.
"""
...
@classmethod
def make_parmesan(cls, *args, **kwargs):
return cls(*args, **kwargs, _initialiser="_init_parmesan")
@classmethod
def make_gouda(cls, *args, **kwargs):
return cls(*args, **kwargs, _initialiser="_init_gouda")
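For completeness, here is a hypothetical filling-in of those placeholder bodies (the parameters and attribute names are assumptions) showing how calls get routed:

class Cheese:
    def __init__(self, *args, _initialiser="_default_init", **kwargs):
        getattr(self, _initialiser)(*args, **kwargs)

    def _default_init(self, num_holes=0):
        self.number_of_holes = num_holes

    def _init_parmesan(self, num_holes=15):
        self.number_of_holes = num_holes

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_parmesan")

plain = Cheese(num_holes=3)        # routed through _default_init
parmesan = Cheese.make_parmesan()  # routed through _init_parmesan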
First, you define a generic __init__ that can handle initializing Cheese without having to know about special kinds of cheeses. Second, you define a class method that generates the appropriate arguments to the generic __init__ for certain special cases. Here, you are basically reinventing parts of inheritance.
This is how I solved it for a YearQuarter
class I had to create. I created an __init__
which is very tolerant to a wide variety of input.
You use it like this:
>>> from datetime import date
>>> temp1 = YearQuarter(year=2017, month=12)
>>> print temp1
2017-Q4
>>> temp2 = YearQuarter(temp1)
>>> print temp2
2017-Q4
>>> temp3 = YearQuarter((2017, 6))
>>> print temp3
2017-Q2
>>> temp4 = YearQuarter(date(2017, 1, 18))
>>> print temp4
2017-Q1
>>> temp5 = YearQuarter(year=2017, quarter = 3)
>>> print temp5
2017-Q3
And this is what the __init__ and the rest of the class look like:
import datetime
class YearQuarter:
def __init__(self, *args, **kwargs):
if len(args) == 1:
[x] = args
if isinstance(x, datetime.date):
self._year = int(x.year)
self._quarter = (int(x.month) + 2) / 3
elif isinstance(x, tuple):
year, month = x
self._year = int(year)
month = int(month)
if 1 <= month <= 12:
self._quarter = (month + 2) / 3
else:
raise ValueError
elif isinstance(x, YearQuarter):
self._year = x._year
self._quarter = x._quarter
elif len(args) == 2:
year, month = args
self._year = int(year)
month = int(month)
if 1 <= month <= 12:
self._quarter = (month + 2) / 3
else:
raise ValueError
elif kwargs:
self._year = int(kwargs["year"])
if "quarter" in kwargs:
quarter = int(kwargs["quarter"])
if 1 <= quarter <= 4:
self._quarter = quarter
else:
raise ValueError
elif "month" in kwargs:
month = int(kwargs["month"])
if 1 <= month <= 12:
self._quarter = (month + 2) / 3
else:
raise ValueError
def __str__(self):
return '{0}-Q{1}'.format(self._year, self._quarter)
For an __init__(self, obj) like this, I test inside __init__ with if str(obj.__class__.__name__) == 'NameOfMyClass': ... elif etc.
__init__
should take a year and a quarter directly, rather than a single value of unknown type. A class method from_date
can handle extracting a year and quarter from a datetime.date
value, then calling YearQuarter(y, q)
. You could define a similar class method from_tuple
, but that hardly seems worth doing since you could simply call YearQuarter(*t)
.
__init__ shouldn't be responsible for analyzing every possible set of values you might use to create an instance. def __init__(self, year, quarter): self._year = year; self._quarter = quarter: that's it (though maybe with some range checking on quarter). Other class methods handle the job of mapping a different argument or arguments to a year and a quarter that can be passed to __init__.
For example, from_year_month takes a month m, maps it to a quarter q, then calls YearQuarter(y, q). from_date extracts the year and the month from the date instance, then calls the same from_year_month. No repetition, and each method is responsible for one specific way of generating a year and a quarter to pass to __init__ (see the sketch below).
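A minimal sketch of that suggestion (the range checks and exact details are assumptions, not from the original comment):

import datetime

class YearQuarter:
    def __init__(self, year, quarter):
        if not 1 <= quarter <= 4:
            raise ValueError("quarter must be between 1 and 4")
        self._year = int(year)
        self._quarter = int(quarter)

    @classmethod
    def from_year_month(cls, year, month):
        if not 1 <= month <= 12:
            raise ValueError("month must be between 1 and 12")
        return cls(year, (month + 2) // 3)

    @classmethod
    def from_date(cls, d):
        # extract year and month from a datetime.date, then reuse from_year_month
        return cls.from_year_month(d.year, d.month)

    def __str__(self):
        return '{0}-Q{1}'.format(self._year, self._quarter)

print(YearQuarter(2017, 3))                                # 2017-Q3
print(YearQuarter.from_date(datetime.date(2017, 1, 18)))   # 2017-Q1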
class Cheese:
def __init__(self, *args, **kwargs):
"""A user-friendly initialiser for the general-purpose constructor.
"""
...
def _init_parmesan(self, *args, **kwargs):
"""A special initialiser for Parmesan cheese.
"""
...
def _init_gauda(self, *args, **kwargs):
"""A special initialiser for Gauda cheese.
"""
...
@classmethod
def make_parmesan(cls, *args, **kwargs):
new = cls.__new__(cls)
new._init_parmesan(*args, **kwargs)
return new
@classmethod
def make_gauda(cls, *args, **kwargs):
new = cls.__new__(cls)
new._init_gauda(*args, **kwargs)
return new
Better to have a single general-purpose __init__ method; the other class methods either call it as-is (cleanest) or handle special initialization actions via any helper classmethods and setters you need (ideally none).
I do not always have a general-purpose __init__ method when I have multiple constructors with different initialisation routines, and I do not see why someone would want one. "The other class methods either call it as-is" -- call what? The __init__ method? It would be strange to call __init__ explicitly, IMO.
There is rarely a good reason for separate _init... methods (see other answers on this question). Worse still, in this case you don't even need them: you haven't shown how the code for _init_parmesan and _init_gouda differs, so there is zero reason not to common-case them. Anyway, the Pythonic way to do that is to supply non-default args via *args or **kwargs (e.g. Cheese(..., type='gouda', ...)), or if that can't handle everything, put the general code in __init__ and the less-commonly-used code in a classmethod make_whatever... and have it call setters.
If the routines are completely independent, I will call them _init_from_foo, _init_from_bar, etc., and call them from __init__ after dispatching by isinstance or by other tests (a sketch follows).
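A minimal sketch of that dispatch-by-isinstance pattern (the types handled here are just illustrative):

import datetime

class YearQuarter:
    def __init__(self, value):
        # dispatch to a dedicated initialiser based on the argument's type
        if isinstance(value, datetime.date):
            self._init_from_date(value)
        elif isinstance(value, tuple):
            self._init_from_tuple(value)
        else:
            raise TypeError("cannot build a YearQuarter from %r" % (value,))

    def _init_from_date(self, d):
        self._year, self._quarter = d.year, (d.month + 2) // 3

    def _init_from_tuple(self, t):
        year, month = t
        self._year, self._quarter = int(year), (int(month) + 2) // 3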
I do not see a straightforward answer with an example yet. The idea is simple:
use __init__ as the "basic" constructor as python only allows one __init__ method
use @classmethod to create any other constructors and call the basic constructor
Here is a new try.
from datetime import date

class Person:
def __init__(self, name, age):
self.name = name
self.age = age
@classmethod
def fromBirthYear(cls, name, birthYear):
return cls(name, date.today().year - birthYear)
Usage:
p = Person('tim', age=18)
p = Person.fromBirthYear('tim', birthYear=2004)
kwargs stands for keyword arguments (seems logical once you know it). :)
*args and **kwargs are overkill here. For most constructors, you want to know what your arguments are.