Int and float different behavior with TypeError "object layout differs from"

Minimal repro to illustrate my question:

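A sketch of the sort of code in question — two sibling subclasses of int (the names A and B are assumed):

```python
class A(int):
    pass

class B(int):
    pass

a = A()
try:
    a.__class__ = B
except TypeError as e:
    print(e)  # on 3.10.8: __class__ assignment: 'B' object layout differs from 'A'
```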
results in TypeError: __class__ assignment: 'B' object layout differs from 'A'

The very same code for float won’t throw any error:

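The same shape with float as the base (same assumed names):

```python
class A(float):
    pass

class B(float):
    pass

a = A()
a.__class__ = B          # succeeds: both subclasses add the same layout on top of float
print(type(a).__name__)  # B
```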
I also found out that if I add a __slots__ = () attribute to both the A and B definitions, no error appears on the __class__ reassignment.
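For instance, this variant of the int repro runs cleanly:

```python
class A(int):
    __slots__ = ()

class B(int):
    __slots__ = ()

a = A()
a.__class__ = B          # no error: neither subclass adds a __dict__, so layouts match
print(type(a).__name__)  # B
```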

Is this different behavior for int and float an expected one?

P.S. The real code is in the autoreload extension for Jupyter, which tries to reload a changed class and throws this error when the class is a child of int.

Python version 3.10.8.


typeerror __class__ assignment


This issue has been migrated to GitHub: https://github.com/python/cpython/issues/48850

Created on 2008-12-08 21:25 by terry.reedy, last changed 2022-04-11 14:56 by admin. This issue is now closed.



assigning to __class__ for an extension type: Is it still possible?


gregory.lielens@gmail.com

Python Enhancement Proposals

  • Python »
  • PEP Index »

PEP 252 – Making Types Look More Like Classes

Contents: Introduction; Introspection APIs; Specification of the Class-Based Introspection API; Specification of the Attribute Descriptor API; Static Methods and Class Methods; Backwards Compatibility; Warnings and Errors; Implementation.

This PEP proposes changes to the introspection API for types that makes them look more like classes, and their instances more like class instances. For example, type(x) will be equivalent to x.__class__ for most built-in types. When C is x.__class__ , x.meth(a) will generally be equivalent to C.meth(x, a) , and C.__dict__ contains x’s methods and other attributes.
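A quick illustration of these equivalences as they hold in today's Python:

```python
x = [1, 2, 3]
C = x.__class__
assert type(x) is C is list          # type(x) is equivalent to x.__class__
assert x.count(2) == C.count(x, 2)   # x.meth(a) behaves like C.meth(x, a)
assert 'count' in C.__dict__         # C.__dict__ contains x's methods
```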

This PEP also introduces a new approach to specifying attributes, using attribute descriptors, or descriptors for short. Descriptors unify and generalize several different common mechanisms used for describing attributes: a descriptor can describe a method, a typed field in the object structure, or a generalized attribute represented by getter and setter functions.

Based on the generalized descriptor API, this PEP also introduces a way to declare class methods and static methods.

[Editor’s note: the ideas described in this PEP have been incorporated into Python. The PEP no longer accurately describes the implementation.]

One of Python’s oldest language warts is the difference between classes and types. For example, you can’t directly subclass the dictionary type, and the introspection interface for finding out what methods and instance variables an object has is different for types and for classes.

Healing the class/type split is a big effort, because it affects many aspects of how Python is implemented. This PEP concerns itself with making the introspection API for types look the same as that for classes. Other PEPs will propose making classes look more like types, and subclassing from built-in types; these topics are not on the table for this PEP.

Introspection concerns itself with finding out what attributes an object has. Python’s very general getattr/setattr API makes it impossible to guarantee that there always is a way to get a list of all attributes supported by a specific object, but in practice two conventions have appeared that together work for almost all objects. I’ll call them the class-based introspection API and the type-based introspection API; class API and type API for short.

The class-based introspection API is used primarily for class instances; it is also used by Jim Fulton’s ExtensionClasses. It assumes that all data attributes of an object x are stored in the dictionary x.__dict__ , and that all methods and class variables can be found by inspection of x’s class, written as x.__class__ . Classes have a __dict__ attribute, which yields a dictionary containing methods and class variables defined by the class itself, and a __bases__ attribute, which is a tuple of base classes that must be inspected recursively. Some assumptions here are:

  • attributes defined in the instance dict override attributes defined by the object’s class;
  • attributes defined in a derived class override attributes defined in a base class;
  • attributes in an earlier base class (meaning occurring earlier in __bases__ ) override attributes in a later base class.

(The last two rules together are often summarized as the left-to-right, depth-first rule for attribute search. This is the classic Python attribute lookup rule. Note that PEP 253 will propose to change the attribute lookup order, and if accepted, this PEP will follow suit.)

The type-based introspection API is supported in one form or another by most built-in objects. It uses two special attributes, __members__ and __methods__ . The __methods__ attribute, if present, is a list of method names supported by the object. The __members__ attribute, if present, is a list of data attribute names supported by the object.

The type API is sometimes combined with a __dict__ that works the same as for instances (for example for function objects in Python 2.1, f.__dict__ contains f’s dynamic attributes, while f.__members__ lists the names of f’s statically defined attributes).

Some caution must be exercised: some objects don’t list their “intrinsic” attributes (like __dict__ and __doc__ ) in __members__ , while others do; sometimes attribute names occur both in __members__ or __methods__ and as keys in __dict__ , in which case it’s anybody’s guess whether the value found in __dict__ is used or not.

The type API has never been carefully specified. It is part of Python folklore, and most third party extensions support it because they follow examples that support it. Also, any type that uses Py_FindMethod() and/or PyMember_Get() in its tp_getattr handler supports it, because these two functions special-case the attribute names __methods__ and __members__ , respectively.

Jim Fulton’s ExtensionClasses ignore the type API, and instead emulate the class API, which is more powerful. In this PEP, I propose to phase out the type API in favor of supporting the class API for all types.

One argument in favor of the class API is that it doesn’t require you to create an instance in order to find out which attributes a type supports; this in turn is useful for documentation processors. For example, the socket module exports the SocketType object, but this currently doesn’t tell us what methods are defined on socket objects. Using the class API, SocketType would show exactly what the methods for socket objects are, and we can even extract their docstrings, without creating a socket. (Since this is a C extension module, the source-scanning approach to docstring extraction isn’t feasible in this case.)

Objects may have two kinds of attributes: static and dynamic. The names and sometimes other properties of static attributes are knowable by inspection of the object’s type or class, which is accessible through obj.__class__ or type(obj) . (I’m using type and class interchangeably; a clumsy but descriptive term that fits both is “meta-object”.)

(XXX static and dynamic are not great terms to use here, because “static” attributes may actually behave quite dynamically, and because they have nothing to do with static class members in C++ or Java. Barry suggests to use immutable and mutable instead, but those words already have precise and different meanings in slightly different contexts, so I think that would still be confusing.)

Examples of dynamic attributes are instance variables of class instances, module attributes, etc. Examples of static attributes are the methods of built-in objects like lists and dictionaries, and the attributes of frame and code objects ( f.f_code , c.co_filename , etc.). When an object with dynamic attributes exposes these through its __dict__ attribute, __dict__ is a static attribute.

The names and values of dynamic properties are typically stored in a dictionary, and this dictionary is typically accessible as obj.__dict__ . The rest of this specification is more concerned with discovering the names and properties of static attributes than with dynamic attributes; the latter are easily discovered by inspection of obj.__dict__ .

In the discussion below, I distinguish two kinds of objects: regular objects (like lists, ints, functions) and meta-objects. Types and classes are meta-objects. Meta-objects are also regular objects, but we’re mostly interested in them because they are referenced by the __class__ attribute of regular objects (or by the __bases__ attribute of other meta-objects).

The class introspection API consists of the following elements:

  • the __class__ and __dict__ attributes on regular objects;
  • the __bases__ and __dict__ attributes on meta-objects;
  • precedence rules;
  • attribute descriptors.

Together, these not only tell us about all attributes defined by a meta-object, but they also help us calculate the value of a specific attribute of a given object.

A regular object may have a __dict__ attribute. If it does, this should be a mapping (not necessarily a dictionary) supporting at least __getitem__() , keys() , and has_key() . This gives the dynamic attributes of the object. The keys in the mapping give attribute names, and the corresponding values give their values.

Typically, the value of an attribute with a given name is the same object as the value corresponding to that name as a key in the __dict__ . In other words, obj.__dict__['spam'] is obj.spam . (But see the precedence rules below; a static attribute with the same name may override the dictionary item.)
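For example:

```python
class Example:
    pass

obj = Example()
obj.spam = ['eggs']
print(obj.__dict__['spam'] is obj.spam)  # True: the very same object
```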

A regular object usually has a __class__ attribute. If it does, this references a meta-object. A meta-object can define static attributes for the regular object whose __class__ it is. This is normally done through the following mechanism:

A meta-object may have a __dict__ attribute, of the same form as the __dict__ attribute for regular objects (a mapping but not necessarily a dictionary). If it does, the keys of the meta-object’s __dict__ are names of static attributes for the corresponding regular object. The values are attribute descriptors; we’ll explain these later. An unbound method is a special case of an attribute descriptor.

Because a meta-object is also a regular object, the items in a meta-object’s __dict__ correspond to attributes of the meta-object; however, some transformation may be applied, and bases (see below) may define additional dynamic attributes. In other words, mobj.spam is not always mobj.__dict__['spam'] . (This rule contains a loophole because for classes, if C.__dict__['spam'] is a function, C.spam is an unbound method object.)

A meta-object may have a __bases__ attribute. If it does, this should be a sequence (not necessarily a tuple) of other meta-objects, the bases. An absent __bases__ is equivalent to an empty sequence of bases. There must never be a cycle in the relationship between meta-objects defined by __bases__ attributes; in other words, the __bases__ attributes define a directed acyclic graph, with arcs pointing from derived meta-objects to their base meta-objects. (It is not necessarily a tree, since multiple classes can have the same base class.) The __dict__ attributes of a meta-object in the inheritance graph supply attribute descriptors for the regular object whose __class__ attribute points to the root of the inheritance tree (which is not the same as the root of the inheritance hierarchy – rather more the opposite, at the bottom given how inheritance trees are typically drawn). Descriptors are first searched in the dictionary of the root meta-object, then in its bases, according to a precedence rule (see the next paragraph).

When two meta-objects in the inheritance graph for a given regular object both define an attribute descriptor with the same name, the search order is up to the meta-object. This allows different meta-objects to define different search orders. In particular, classic classes use the old left-to-right depth-first rule, while new-style classes use a more advanced rule (see the section on method resolution order in PEP 253 ).

When a dynamic attribute (one defined in a regular object’s __dict__ ) has the same name as a static attribute (one defined by a meta-object in the inheritance graph rooted at the regular object’s __class__ ), the static attribute has precedence if it is a descriptor that defines a __set__ method (see below); otherwise (if there is no __set__ method) the dynamic attribute has precedence. In other words, for data attributes (those with a __set__ method), the static definition overrides the dynamic definition, but for other attributes, dynamic overrides static.

Rationale: we can’t have a simple rule like “static overrides dynamic” or “dynamic overrides static”, because some static attributes indeed override dynamic attributes; for example, a key ‘__class__’ in an instance’s __dict__ is ignored in favor of the statically defined __class__ pointer, but on the other hand most keys in inst.__dict__ override attributes defined in inst.__class__ . Presence of a __set__ method on a descriptor indicates that this is a data descriptor. (Even read-only data descriptors have a __set__ method: it always raises an exception.) Absence of a __set__ method on a descriptor indicates that the descriptor isn’t interested in intercepting assignment, and then the classic rule applies: an instance variable with the same name as a method hides the method until it is deleted.
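A small illustration of this precedence rule, with hypothetical descriptor classes:

```python
class DataDescr:
    def __get__(self, obj, objtype=None):
        return 'from data descriptor'
    def __set__(self, obj, value):
        # even a read-only data descriptor defines __set__; it just raises
        raise AttributeError('read-only')

class NonDataDescr:
    def __get__(self, obj, objtype=None):
        return 'from non-data descriptor'

class C:
    d = DataDescr()
    n = NonDataDescr()

c = C()
c.__dict__['d'] = 'instance value'
c.__dict__['n'] = 'instance value'
print(c.d)  # from data descriptor  (static definition overrides dynamic)
print(c.n)  # instance value        (dynamic overrides static)
```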

This is where it gets interesting – and messy. Attribute descriptors (descriptors for short) are stored in the meta-object’s __dict__ (or in the __dict__ of one of its ancestors), and have two uses: a descriptor can be used to get or set the corresponding attribute value on the (regular, non-meta) object, and it has an additional interface that describes the attribute for documentation and introspection purposes.

There is little prior art in Python for designing the descriptor’s interface, neither for getting/setting the value nor for describing the attribute otherwise, except some trivial properties (it’s reasonable to assume that __name__ and __doc__ should be the attribute’s name and docstring). I will propose such an API below.

If an object found in the meta-object’s __dict__ is not an attribute descriptor, backward compatibility dictates certain minimal semantics. This basically means that if it is a Python function or an unbound method, the attribute is a method; otherwise, it is the default value for a dynamic data attribute. Backwards compatibility also dictates that (in the absence of a __setattr__ method) it is legal to assign to an attribute corresponding to a method, and that this creates a data attribute shadowing the method for this particular instance. However, these semantics are only required for backwards compatibility with regular classes.

The introspection API is a read-only API. We don’t define the effect of assignment to any of the special attributes ( __dict__ , __class__ and __bases__ ), nor the effect of assignment to the items of a __dict__ . Generally, such assignments should be considered off-limits. A future PEP may define some semantics for some such assignments. (Especially because currently instances support assignment to __class__ and __dict__ , and classes support assignment to __bases__ and __dict__ .)

Attribute descriptors may have the following attributes. In the examples, x is an object, C is x.__class__ , x.meth() is a method, and x.ivar is a data attribute or instance variable. All attributes are optional – a specific attribute may or may not be present on a given descriptor. An absent attribute means that the corresponding information is not available or the corresponding functionality is not implemented.

  • __name__ : the attribute name. Because of aliasing and renaming, the attribute may (additionally or exclusively) be known under a different name, but this is the name under which it was born. Example: C.meth.__name__ == 'meth' .
  • __doc__ : the attribute’s documentation string. This may be None.
  • __objclass__ : the class that declared this attribute. The descriptor only applies to objects that are instances of this class (this includes instances of its subclasses). Example: C.meth.__objclass__ is C .
  • __get__() : a function callable with one or two arguments that retrieves the attribute value from an object. This is also referred to as a “binding” operation, because it may return a “bound method” object in the case of method descriptors. The first argument, X, is the object from which the attribute must be retrieved or to which it must be bound. When X is None, the optional second argument, T, should be a meta-object and the binding operation may return an unbound method restricted to instances of T. When both X and T are specified, X should be an instance of T. Exactly what is returned by the binding operation depends on the semantics of the descriptor; for example, static methods and class methods (see below) ignore the instance and bind to the type instead.
  • __set__() : a function of two arguments that sets the attribute value on the object. If the attribute is read-only, this method may raise a TypeError or AttributeError exception (both are allowed, because both are historically found for undefined or unsettable attributes). Example: C.ivar.set(x, y) ~~ x.ivar = y .
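These attributes can be seen on a built-in method descriptor, and a property shows __set__ in action (the class and names here are illustrative):

```python
d = list.__dict__['append']    # a method descriptor on a built-in type
print(d.__name__)              # append
print(d.__objclass__ is list)  # True

lst = []
bound = d.__get__(lst, list)   # the "binding" operation returns a bound method
bound(42)
print(lst)                     # [42]

class C:
    ivar = property(lambda self: self._v,
                    lambda self, v: setattr(self, '_v', v))

x = C()
C.ivar.__set__(x, 5)           # ~~ x.ivar = 5
print(x.ivar)                  # 5
```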

The descriptor API makes it possible to add static methods and class methods. Static methods are easy to describe: they behave pretty much like static methods in C++ or Java. Here’s an example:

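The PEP's listing, reconstructed here in modern syntax (the original predates both Python 3 and the @staticmethod decorator notation):

```python
class C:
    def foo(x, y):
        print('staticmethod', x, y)
    foo = staticmethod(foo)

C.foo(1, 2)
c = C()
c.foo(1, 2)
```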
Both the call C.foo(1, 2) and the call c.foo(1, 2) call foo() with two arguments, and print “staticmethod 1 2”. No “self” is declared in the definition of foo() , and no instance is required in the call.

The line “foo = staticmethod(foo)” in the class statement is the crucial element: this makes foo() a static method. The built-in staticmethod() wraps its function argument in a special kind of descriptor whose __get__() method returns the original function unchanged. Without this, the __get__() method of standard function objects would have created a bound method object for ‘c.foo’ and an unbound method object for ‘C.foo’.

(XXX Barry suggests to use “sharedmethod” instead of “staticmethod”, because the word static is being overloaded in so many ways already. But I’m not sure if shared conveys the right meaning.)

Class methods use a similar pattern to declare methods that receive an implicit first argument that is the class for which they are invoked. This has no C++ or Java equivalent, and is not quite the same as what class methods are in Smalltalk, but may serve a similar purpose. According to Armin Rigo, they are similar to “virtual class methods” in Borland Pascal dialect Delphi. (Python also has real metaclasses, and perhaps methods defined in a metaclass have more right to the name “class method”; but I expect that most programmers won’t be using metaclasses.) Here’s an example:

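The PEP's listing, again reconstructed in modern syntax:

```python
class C:
    def foo(cls, y):
        print('classmethod', cls, y)  # prints 'classmethod', the class, and y
    foo = classmethod(foo)

C.foo(1)
c = C()
c.foo(1)
```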
Both the call C.foo(1) and the call c.foo(1) end up calling foo() with two arguments, and print “classmethod __main__.C 1”. The first argument of foo() is implied, and it is the class, even if the method was invoked via an instance. Now let’s continue the example:

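Continuing the example with a subclass (class C is repeated so the snippet stands alone):

```python
class C:
    def foo(cls, y):
        print('classmethod', cls, y)
    foo = classmethod(foo)

class D(C):
    pass

D.foo(1)   # the first argument is class D
d = D()
d.foo(1)   # the first argument is class D here too
```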
This prints “classmethod __main__.D 1” both times; in other words, the class passed as the first argument of foo() is the class involved in the call, not the class involved in the definition of foo() .

But notice this:

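A reconstruction of the overriding case (again with C repeated for a self-contained snippet):

```python
class C:
    def foo(cls, y):
        print('classmethod', cls, y)
    foo = classmethod(foo)

class E(C):
    def foo(cls, y):
        print('E.foo() called')
        C.foo(y)          # upcall names C explicitly; cls.foo(y) would recurse forever
    foo = classmethod(foo)

E.foo(1)
e = E()
e.foo(1)
```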
In this example, the call to C.foo() from E.foo() will see class C as its first argument, not class E. This is to be expected, since the call specifies the class C. But it stresses the difference between these class methods and methods defined in metaclasses, where an upcall to a metamethod would pass the target class as an explicit first argument. (If you don’t understand this, don’t worry, you’re not alone.) Note that calling cls.foo(y) would be a mistake – it would cause infinite recursion. Also note that you can’t specify an explicit ‘cls’ argument to a class method. If you want this (e.g. the __new__ method in PEP 253 requires this), use a static method with a class as its explicit first argument instead.

XXX The following is VERY rough text that I wrote with a different audience in mind; I’ll have to go through this to edit it more. XXX It also doesn’t go into enough detail for the C API.

A built-in type can declare special data attributes in two ways: using a struct memberlist (defined in structmember.h) or a struct getsetlist (defined in descrobject.h). The struct memberlist is an old mechanism put to new use: each attribute has a descriptor record including its name, an enum giving its type (various C types are supported as well as PyObject * ), an offset from the start of the instance, and a read-only flag.

The struct getsetlist mechanism is new, and intended for cases that don’t fit in that mold, because they either require additional checking, or are plain calculated attributes. Each attribute here has a name, a getter C function pointer, a setter C function pointer, and a context pointer. The function pointers are optional, so that for example setting the setter function pointer to NULL makes a read-only attribute. The context pointer is intended to pass auxiliary information to generic getter/setter functions, but I haven’t found a need for this yet.

Note that there is also a similar mechanism to declare built-in methods: these are PyMethodDef structures, which contain a name and a C function pointer (and some flags for the calling convention).

Traditionally, built-in types have had to define their own tp_getattro and tp_setattro slot functions to make these attribute definitions work ( PyMethodDef and struct memberlist are quite old). There are convenience functions that take an array of PyMethodDef or memberlist structures, an object, and an attribute name, and return or set the attribute if found in the list, or raise an exception if not found. But these convenience functions had to be explicitly called by the tp_getattro or tp_setattro method of the specific type, and they did a linear search of the array using strcmp() to find the array element describing the requested attribute.

I now have a brand spanking new generic mechanism that improves this situation substantially.

  • Pointers to arrays of PyMethodDef , memberlist, getsetlist structures are part of the new type object ( tp_methods , tp_members , tp_getset ).
  • At type initialization time (in PyType_InitDict() ), for each entry in those three arrays, a descriptor object is created and placed in a dictionary that belongs to the type ( tp_dict ).
  • Descriptors are very lean objects that mostly point to the corresponding structure. An implementation detail is that all descriptors share the same object type, and a discriminator field tells what kind of descriptor it is (method, member, or getset).
  • As explained in PEP 252 , descriptors have a get() method that takes an object argument and returns that object’s attribute; descriptors for writable attributes also have a set() method that takes an object and a value and sets that object’s attribute. Note that the get() method also serves as a bind() operation for methods, binding the unbound method implementation to the object.
  • Instead of providing their own tp_getattro and tp_setattro implementation, almost all built-in objects now place PyObject_GenericGetAttr and (if they have any writable attributes) PyObject_GenericSetAttr in their tp_getattro and tp_setattro slots. (Or, they can leave these NULL , and inherit them from the default base object, if they arrange for an explicit call to PyType_InitDict() for the type before the first instance is created.)
  • In the simplest case, PyObject_GenericGetAttr() does exactly one dictionary lookup: it looks up the attribute name in the type’s dictionary (obj->ob_type->tp_dict). Upon success, there are two possibilities: the descriptor has a get method, or it doesn’t. For speed, the get and set methods are type slots: tp_descr_get and tp_descr_set . If the tp_descr_get slot is non-NULL, it is called, passing the object as its only argument, and the return value from this call is the result of the getattr operation. If the tp_descr_get slot is NULL , as a fallback the descriptor itself is returned (compare class attributes that are not methods but simple values).
  • PyObject_GenericSetAttr() works very similar but uses the tp_descr_set slot and calls it with the object and the new attribute value; if the tp_descr_set slot is NULL , an AttributeError is raised.
  • But now for a more complicated case. The approach described above is suitable for most built-in objects such as lists, strings, numbers. However, some object types have a dictionary in each instance that can store arbitrary attributes. In fact, when you use a class statement to subtype an existing built-in type, you automatically get such a dictionary (unless you explicitly turn it off, using another advanced feature, __slots__ ). Let’s call this the instance dict, to distinguish it from the type dict.
  • In the more complicated case, there’s a conflict between names stored in the instance dict and names stored in the type dict. If both dicts have an entry with the same key, which one should we return? Looking at classic Python for guidance, I find conflicting rules: for class instances, the instance dict overrides the class dict, except for the special attributes (like __dict__ and __class__ ), which have priority over the instance dict.
  1. Look in the type dict. If you find a data descriptor, use its get() method to produce the result. This takes care of special attributes like __dict__ and __class__ .
  2. Look in the instance dict. If you find anything, that’s it. (This takes care of the requirement that normally the instance dict overrides the class dict.)
  3. Look in the type dict again (in reality this uses the saved result from step 1, of course). If you find a descriptor, use its get() method; if you find something else, that’s it; if it’s not there, raise AttributeError .

This requires a classification of descriptors as data and nondata descriptors. The current implementation quite sensibly classifies member and getset descriptors as data (even if they are read-only!) and method descriptors as nondata. Non-descriptors (like function pointers or plain values) are also classified as non-data (!).

  • This scheme has one drawback: in what I assume to be the most common case, referencing an instance variable stored in the instance dict, it does two dictionary lookups, whereas the classic scheme did a quick test for attributes starting with two underscores plus a single dictionary lookup. (Although the implementation is sadly structured as instance_getattr() calling instance_getattr1() calling instance_getattr2() which finally calls PyDict_GetItem() , and the underscore test calls PyString_AsString() rather than inlining this. I wonder if optimizing the snot out of this might not be a good idea to speed up Python 2.2, if we weren’t going to rip it all out. :-)
  • A benchmark verifies that in fact this is as fast as classic instance variable lookup, so I’m no longer worried.
  • Modification for dynamic types: steps 1 and 3 look in the dictionary of the type and all its base classes (in MRO sequence, of course).
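The three-step lookup can be sketched in pure Python (a simplification: it ignores attribute caching, the __getattr__/__getattribute__ hooks, and the C-level slot mechanics):

```python
def generic_getattr(obj, name):
    # Step 1: search the type dict along the MRO; remember any hit.
    descr = None
    for klass in type(obj).__mro__:
        if name in klass.__dict__:
            descr = klass.__dict__[name]
            break
    # A data descriptor (one whose type defines __set__) wins immediately.
    if descr is not None and hasattr(type(descr), '__set__'):
        return type(descr).__get__(descr, obj, type(obj))
    # Step 2: the instance dict.
    inst_dict = getattr(obj, '__dict__', {})
    if name in inst_dict:
        return inst_dict[name]
    # Step 3: fall back to the saved type-dict result.
    if descr is not None:
        if hasattr(type(descr), '__get__'):          # non-data descriptor
            return type(descr).__get__(descr, obj, type(obj))
        return descr                                 # plain value
    raise AttributeError(name)


class C:
    x = 1
    def meth(self):
        return 'called'

c = C()
c.y = 2
print(generic_getattr(c, 'x'))        # 1
print(generic_getattr(c, 'y'))        # 2
print(generic_getattr(c, 'meth')())   # called
```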

Let’s look at lists. In classic Python, the method names of lists were available as the __methods__ attribute of list objects:

Under the new proposal, the __methods__ attribute no longer exists:

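In today's Python the old spelling simply fails:

```python
try:
    [].__methods__
except AttributeError as e:
    print(e)  # __methods__ no longer exists on list instances
```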
Instead, you can get the same information from the list type:

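Asking the type rather than an instance, with the modern dir() spelling:

```python
names = dir(list)
print([n for n in names if not n.startswith('_')])
# ['append', 'clear', 'copy', 'count', 'extend', 'index',
#  'insert', 'pop', 'remove', 'reverse', 'sort']
print('__iadd__' in names, '__len__' in names, '__ne__' in names)  # True True True
```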
The new introspection API gives more information than the old one: in addition to the regular methods, it also shows the methods that are normally invoked through special notations, e.g. __iadd__ ( += ), __len__ ( len ), __ne__ ( != ). You can invoke any method from this list directly:

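For example:

```python
lst = [1, 2]
list.append(lst, 3)       # same as lst.append(3)
print(lst)                # [1, 2, 3]
print(list.__len__(lst))  # 3, same as len(lst)
```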
This is just like it is for user-defined classes.

Notice a familiar yet surprising name in the list: __init__ . This is the domain of PEP 253 .

A partial implementation of this PEP is available from CVS as a branch named “descr-branch”. To experiment with this implementation, proceed to check out Python from CVS according to the instructions at http://sourceforge.net/cvs/?group_id=5470 but add the arguments “-r descr-branch” to the cvs checkout command. (You can also start with an existing checkout and do “cvs update -r descr-branch”.) For some examples of the features described here, see the file Lib/test/test_descr.py.

Note: the code in this branch goes way beyond this PEP; it is also the experimentation area for PEP 253 (Subtyping Built-in Types).

This document has been placed in the public domain.

Source: https://github.com/python/peps/blob/main/peps/pep-0252.rst

Last modified: 2023-09-09 17:39:29 GMT


The Decorator Pattern

A “Structural Pattern” from the Gang of Four book

The “Decorator Pattern” ≠ Python “decorators”!

If you are interested in Python decorators like @classmethod and @contextmanager and @wraps() , then stay tuned for a later phase of this project in which I start tackling Python language features.

The Decorator Pattern can be useful in Python code! Happily, the pattern can be easier to implement in a dynamic language like Python than in the static languages where it was first practiced. Use it on the rare occasion when you need to adjust the behavior of an object that you can’t subclass but can only wrap at runtime.

  • Implementing: Static wrapper
  • Implementing: Tactical wrapper
  • Implementing: Dynamic wrapper
  • Caveat: Wrapping doesn’t actually work
  • Hack: Monkey-patch each object
  • Hack: Monkey-patch the class
  • Further Reading

The Python core developers made the terminology surrounding this design pattern more confusing than necessary by re-using the term decorator for an entirely unrelated language feature. The timeline:

  • The design pattern was developed and named in the early 1990s by participants in the “Architecture Handbook” series of workshops that were kicked off at OOPSLA ’90, a conference for researchers of object-oriented programming languages.
  • The design pattern became famous as the “Decorator Pattern” with the 1994 publication of the Gang of Four’s Design Patterns book.
  • In 2003, the Python core developers decided to re-use the term decorator for a completely unrelated feature they were adding to Python 2.4.

Why were the Python core developers not more concerned about the name collision? It may simply be that Python’s dynamic features kept its programming community so separate from the world of design-pattern literature for heavyweight languages that the core developers never imagined that confusion could arise.

To try to keep the two concepts straight, I will use the term decorator class instead of just decorator when referring to a class that implements the Decorator Pattern.

Definition

A decorator class:

  • Is an adapter (see the Adapter Pattern )
  • That implements the same interface as the object it wraps
  • That delegates method calls to the object it wraps

The decorator class’s purpose is to add to, remove from, or adjust the behaviors that the wrapped object would normally implement when its methods are called. With a decorator class, you might:

  • Log method calls that would normally work silently
  • Perform extra setup or cleanup around a method
  • Pre-process method arguments
  • Post-process return values
  • Forbid actions that the wrapped object would normally allow

These purposes might remind you of situations in which you would also think of subclassing an existing class. But the Decorator Pattern has a crucial advantage over a subclass: you can only solve a problem with a subclass when your own code is in charge of creating the objects in the first place. For example, it isn’t helpful to subclass the Python file object if a library you’re using is returning normal file objects and you have no way to intercept their construction — your new MyEvenBetterFile subclass would sit unused. A decorator class does not have that limitation. It can be wrapped around a plain old file object any time you want, without any need for you to have been in control when the wrapped object was created.

Implementing: Static wrapper ¶

First, let’s learn the drudgery of creating the kind of decorator class you would write in C++ or Java. We will not take advantage of the fact that Python is a dynamic language, but will instead write static (non-dynamic) code where every method and attribute appears literally, on the page.

To be complete — to provide a real guarantee that every method called and attribute manipulated on the decorator object will be backed by the real behavior of the adapted object — the decorator class will need to implement:

  • Every method of the adapted class
  • A getter for every attribute
  • A setter for every attribute
  • A deleter for every attribute

This approach is conceptually simple but, wow, it involves a lot of code!

Imagine that one library is giving you open Python file objects, and you need to pass them to another routine or library — but to debug some product issues with latency, you want to log each time that data is written to the file.

Python file objects often seem quite simple. We usually read() from them, write() to them, and not much else. But in fact the file object supports more than a dozen methods and offers five different attributes! A wrapper class that really wants to implement that full behavior runs to nearly 100 lines of code — as shown here, in our first working example of the Decorator Pattern:
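The full wrapper is too long to reproduce here, so the sketch below condenses it: most of the dozen-plus pass-through methods and all but one of the attribute properties are elided, and the constructor's `logger` argument is an assumption of mine rather than a quotation from the original. The shape, though, is faithful to the static approach:

```python
import io
import logging

class WriteLoggingFile1:
    """Static decorator: wrap a file, logging every write."""

    def __init__(self, file, logger):
        self._file = file
        self._logger = logger

    # The drudgery: one pass-through method per file method.
    def close(self):
        return self._file.close()

    def flush(self):
        return self._file.flush()

    def read(self, size=-1):
        return self._file.read(size)

    def seek(self, offset, whence=0):
        return self._file.seek(offset, whence)

    # ...a dozen further pass-through methods, elided here for space.

    # Each attribute needs a getter, a setter, and a deleter.
    @property
    def mode(self):
        return self._file.mode

    @mode.setter
    def mode(self, value):
        self._file.mode = value

    @mode.deleter
    def mode(self):
        del self._file.mode

    # ...the same three-method dance for name, encoding, and the
    # other attributes, elided here.

    # Finally, the half-dozen lines that motivated all the boilerplate:
    def write(self, s):
        self._logger.debug('wrote %d characters', len(s))
        return self._file.write(s)

    def writelines(self, lines):
        lines = list(lines)
        self._logger.debug('wrote %d lines', len(lines))
        return self._file.writelines(lines)

# An io.StringIO stands in for a real file in this demonstration.
f = WriteLoggingFile1(io.StringIO(), logging.getLogger('demo'))
f.write('hello ')
f.writelines(['world\n'])
```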

So for the sake of the half-dozen lines of code at the bottom that supplement the behavior of write() and writelines() , another hundred or so lines of code wound up being necessary.

You will notice that each Python object attribute goads us into being even more verbose than Java! A typical Java attribute is implemented as exactly two methods, like getEncoding() and setEncoding() . A Python attribute, on the other hand, will in the general case need to be backed by three actions — get, set, and delete — because Python’s object model is dynamic and supports the idea that an attribute might disappear from an instance.

Of course, if the class you are decorating does not have as many methods and attributes as the Python file object we took as our example, then your wrapper will be shorter. But in the general case, writing out a full wrapper class will be tedious unless you have a tool like an IDE that can automate the process. Also, the wrapper will need to be updated in the future if the underlying object gains (or loses) any methods, arguments, or attributes.

Implementing: Tactical wrapper ¶

The wrapper in the previous section might have struck you as ridiculous. It tackled the Python file object as a general example of a class that needed to be wrapped, instead of studying how file objects actually work to look for shortcuts:

  • File objects are implemented in the C language and don’t, in fact, permit deletion of any of their attributes. So our wrapper could have omitted all 6 deleter methods without any consequence, since the default behavior of a property in the absence of a deleter is to disallow deletion anyway. This would have saved 18 lines of code.
  • All file attributes except mode are read-only and raise an AttributeError if assigned to — which is the behavior if a property lacks a setter method. So 5 of our 6 setters can be omitted, saving 15 more lines of code and bringing our wrapper to ⅓ its original length without sacrificing correctness.

It might also have occurred to you that the code to which you are passing the wrapper is unlikely to call every single file method that exists. What if it only calls two methods? Or only one? In many cases a programmer has found that a trivial wrapper like the following will perfectly satisfy real-world code that just wants to write to a file:
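A minimal sketch of such a tactical wrapper, hypothetically named WriteLoggingFile2 to fit the numbering the article uses elsewhere (the `logger` argument is again my assumption):

```python
import io
import logging

class WriteLoggingFile2:
    """Tactical decorator: wrap only what the consuming code calls."""

    def __init__(self, file, logger):
        self._file = file
        self._logger = logger

    def write(self, s):
        self._logger.debug('wrote %d characters', len(s))
        return self._file.write(s)

# An io.StringIO stands in for a real file in this demonstration.
f = WriteLoggingFile2(io.StringIO(), logging.getLogger('demo'))
f.write('hello')
```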

Yes, this can admittedly be a bit dangerous. A routine that seems so happy with a minimal wrapper like this can suddenly fail later if rare circumstances make it dig into methods or attributes that you never implemented because you never saw it use them. Even if you audit the library’s code and are sure it can never call any method besides write() , that could change the next time you upgrade the library to a new version.

In a more formal programming language, a duck typing requirement like “this function requires a file object” would likely be replaced with an exact specification like “this argument needs to support a writelines() method” or “pass an object that offers every method in the interface IWritableFile.” But most Python code lacks this precision and will force you, as the author of a wrapper class, to decide where to draw the line between the magnificent pedantry of wrapping every possible method and the danger of not wrapping enough.

Implementing: Dynamic wrapper ¶

A very common approach to the Decorator Pattern in Python is the dynamic wrapper. Instead of trying to implement a method and property for every method and attribute on the wrapped object, a dynamic wrapper intercepts live attribute accesses as the program executes and responds by trying to access the same attribute on the wrapped object.

A dynamic wrapper implements the dunder methods __getattr__() , __setattr__() , and — if it really wants to be feature-complete — __delattr__() and responds to each of them by performing the equivalent operation on the wrapped object. Because __getattr__() is only invoked for attributes that are in fact missing on the wrapper, the wrapper is free to offer real implementations of any methods or properties it wants to intercept.

There are a few edge cases that prevent every attribute access from being handled with __getattr__() . For example, if the wrapped object is iterable, then the basic operation iter() will fail on the wrapper if the wrapper is not given a real __iter__() method of its own. Similarly, even if the wrapped object is an iterator, next() will fail unless the wrapper offers a real __next__() , because these two operations examine an object’s class for dunder methods instead of hitting the object directly with a __getattr__() .

As a result of these special cases, a getattr-powered wrapper usually involves at least a half-dozen methods in addition to the methods you specifically want to specialize:
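Here is a sketch of how the getattr-powered WriteLoggingFile3 might look; the exact split between real methods and forwarded ones is my reconstruction rather than a quotation:

```python
import io
import logging

class WriteLoggingFile3:
    """Dynamic decorator: intercept attribute access at runtime."""

    def __init__(self, file, logger):
        self._file = file
        self._logger = logger

    # The methods we actually want to specialize.
    def write(self, s):
        self._logger.debug('wrote %d characters', len(s))
        return self._file.write(s)

    def writelines(self, lines):
        for line in lines:
            self.write(line)

    # iter() and next() look for dunder methods on the class itself,
    # so __getattr__() alone cannot satisfy them.
    def __iter__(self):
        return self.__dict__['_file'].__iter__()

    def __next__(self):
        return self.__dict__['_file'].__next__()

    # Every other attribute is forwarded dynamically.
    def __getattr__(self, name):
        return getattr(self.__dict__['_file'], name)

    def __setattr__(self, name, value):
        if name in ('_file', '_logger'):
            self.__dict__[name] = value
        else:
            setattr(self.__dict__['_file'], name, value)

    def __delattr__(self, name):
        delattr(self.__dict__['_file'], name)

# An io.StringIO stands in for a real file in this demonstration.
f = WriteLoggingFile3(io.StringIO(), logging.getLogger('demo'))
f.write('hello')
f.seek(0)   # reaches the wrapped file through __getattr__()
```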

As you can see, the code can be quite economical compared to the vast slate of methods we saw earlier in WriteLoggingFile1 for manually implementing every possible attribute.

This extra level of indirection does carry a small performance penalty for every attribute access, but is usually preferred to the burden of writing a static wrapper.

Dynamic wrappers also offer pleasant insulation against changes that might happen in the future to the object being wrapped. If a future version of Python adds or removes an attribute or method from the file object, the code of WriteLoggingFile3 will require no change at all.

Caveat: Wrapping doesn’t actually work ¶

If Python didn’t support introspection — if the only operation you could perform on an object was attribute lookup, whether statically through an identifier like f.write or dynamically via getattr(f, attrname) string lookup — then a decorator could be foolproof. As long as every attribute lookup that succeeds on the wrapped object will return the same sort of value when performed on the wrapper, then other Python code would never know the difference.

But Python is not merely a dynamic programming language; it also supports introspection. And introspection is the downfall of the Decorator Pattern. If the code to which you pass the wrapper decides to look deeper, all kinds of differences become apparent. The native file object, for example, is buttressed with many private methods and attributes:
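You can see this for yourself. The exact names vary by Python version and implementation, but on CPython the listing typically includes entries like `_CHUNK_SIZE` and `_checkClosed` (both are assumptions about your particular interpreter, so the sketch below asserts only that some private names exist):

```python
import os

# os.devnull gives us a real writable file object on any platform.
f = open(os.devnull, 'w')
private = sorted(name for name in dir(f)
                 if name.startswith('_') and not name.startswith('__'))
print(private)
f.close()
```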

Your wrapper, on the other hand — if you have crafted it around the file’s public interface — will lack all of those private accouterments. Behind your carefully implemented public methods and attributes are the bare dunder methods of a generic Python object , plus the few you had to implement to maintain compatibility:

The tactical wrapper, of course, looks spectacularly different than a real file object, because it does not even attempt to provide the full range of methods available on the wrapped object:

More interesting is the getattr wrapper. Even though, in practice, it offers access to every attribute and method of the wrapped class, they are completely missing from its dir() because each attribute only springs into existence when accessed by name.

Could even these differences be ironed out? If you scroll through the many dunder methods in the Python Data Model , you might be struck by a sudden wild hope when you see the __dir__ method — surely this is the final secret to camouflaging your wrapper?

Alas, it will not be enough. Even if you implement __dir__() and forward it through to the wrapped object, Python special-cases the __dict__ attribute — accessing it always provides direct access to the dictionary that holds a Python class instance’s attributes.
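A short demonstration with an illustrative wrapper class of my own: even with __dir__() forwarded, __dict__ betrays the wrapper the moment anyone looks.

```python
import io

class DisguisedWrapper:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        return getattr(self._wrapped, name)

    def __dir__(self):
        return dir(self._wrapped)   # forward introspection... almost

w = DisguisedWrapper(io.StringIO())
print('getvalue' in dir(w))   # True: __dir__() forwards happily
print(vars(w))                # but only {'_wrapped': ...}; the disguise slips
```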

You might begin to think of even more obscure ways to subvert Python’s introspection — at this point you might already be thinking of __slots__ , for example — but all roads lead to the same place. However clever and obscure your maneuvers, at least a small chink will still be left in your wrapper’s armor which will allow careful enough introspection to see the difference. Thus we are led to a conclusion:

The Decorator Pattern in Python supports programming — but not metaprogramming . Code that is happy to simply access attributes will be happy to accept a Decorator Pattern wrapper instead. But code that indulges in introspection will see the difference.

Among other things, Python code that attempts to list an object’s attributes, examine its __class__ , or directly access its __dict__ will see differences between the object it expected and the decorator object you have in fact given it instead. Well-written application code would never do such things, of course — they are necessary only when implementing a developer tool like a framework, test harness, or debugger. But as you don’t always have the option of dealing solely with well-written libraries, be prepared to see and work around any symptoms of intrusive introspection as you deploy the Decorator Pattern.

Hack: Monkey-patch each object ¶

There are two final approaches to decoration based on the questionable practice of monkey patching. The first approach takes each object that needs decoration and installs a new method directly on the object, shadowing the official method that remains on the class itself.

If you have ever attempted this maneuver yourself, you might have run aground on the fact that a function installed on a Python object instance does not receive an automatic self argument — instead, it sees only the arguments with which it is literally invoked. So a first try at supplementing a file’s write() with logging will die with an error, because the new method sees only one argument, not two:
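Here is a runnable sketch of that failed attempt, with an io.StringIO standing in for a real file and the error caught rather than left to propagate:

```python
import io
import logging

logger = logging.getLogger('demo')
f = io.StringIO()               # stands in for an open file

def write_and_log(self, s):     # hopes to be handed `self`...
    logger.debug('wrote %d characters', len(s))
    return io.StringIO.write(self, s)

f.write = write_and_log         # shadow the method on this one instance

error = None
try:
    f.write('hello')            # ...but only 'hello' arrives
except TypeError as exc:
    error = exc

print(error)   # missing 1 required positional argument: 's'
```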

The quick way to resolve the dilemma is to do the binding yourself, by providing the object instance to the closure that wraps the new method itself:
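A sketch of that closure-based binding (the helper name is illustrative); grabbing the bound method first means the closure remembers the instance, so the replacement needs no self argument of its own:

```python
import io
import logging

def add_write_logging(file, logger):
    original_write = file.write          # a bound method: it remembers `file`

    def write_and_log(s):                # no `self` needed now
        logger.debug('wrote %d characters', len(s))
        return original_write(s)

    file.write = write_and_log           # shadow write() on this instance only

f = io.StringIO()                        # stands in for an open file
add_write_logging(f, logging.getLogger('demo'))
f.write('hello')
print(f.getvalue())                      # prints: hello
```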

While clunky, this approach does let you update the action of a single method on a single object instance while leaving the entire rest of its behavior alone.

Hack: Monkey-patch the class ¶

Another approach you might see in the wild is to create a subclass that has the desired behaviors overridden, and then to surgically change the class of the object instance. This is not, alas, possible in the general case, and in fact fails for our example here because instances of the built-in file type, like instances of most built-in types, do not permit assignment to their __class__ attribute:
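A runnable sketch of the failed surgery; the subclass name is illustrative, os.devnull supplies a real file object, and the exact TypeError wording varies between Python versions:

```python
import io
import os

class LoggingFile(io.TextIOWrapper):
    """A subclass we would like to swap in for a plain file."""

    def write(self, s):
        print('writing %d characters' % len(s))
        return super().write(s)

f = open(os.devnull, 'w')        # a real TextIOWrapper instance

error = None
try:
    f.__class__ = LoggingFile    # attempt the surgery
except TypeError as exc:
    error = exc

print(error)   # e.g. "__class__ assignment only supported for ..."
f.close()
```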

But in cases where the surgery does work, you will have an object whose behavior is that of your subclass rather than of its original class.

Further Reading ¶

  • If dynamic wrappers and monkey patching spur your interest, check out Graham Dumpleton’s wrapt library , his accompanying series of blog posts , and his monkey patching talk at Kiwi PyCon that delve deep into the arcane technical details of the practice.

© 2018–2020  Brandon Rhodes
