Python's __getattr__ method

int32位 posted @ Oct 09, 2014 10:05:34 AM in python
Please credit http://krystism.is-programmer.com/ when reposting. Corrections are welcome, thanks!

In Python, an attribute access a.xx is equivalent to getattr(a, 'xx'); the special method a.__getattr__('xx') is invoked only as a fallback, when normal attribute lookup fails to find xx. A call such as a.xx() is simply that lookup followed by a call, i.e. getattr(a, 'xx')(), provided the looked-up attribute is callable (implements __call__).
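A minimal sketch (the class name Demo and the values used here are made up purely for illustration) showing that __getattr__ is only a fallback for failed lookups:

class Demo:
    x = 1

    def __getattr__(self, name):
        # Reached only when normal attribute lookup fails
        print("__getattr__ called for", name)
        return 42

d = Demo()
print(d.x)  # 1 -- found by normal lookup, __getattr__ is never called
print(d.y)  # prints "__getattr__ called for y", then 42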

Python's flexibility lies in the fact that __getattr__ can be overridden. By overriding it you can wrap one class so that the wrapper appears to have all the attributes and methods of another class, almost as if it had inherited them (in reality they are obtained through composition and delegation, but they are just as convenient to call). Look at the code:

class Test:
    def __init__(self):
        self.id = 5

    def get(self):
        print("getting ...")

    def update(self):
        print("updating ...")

    def delete(self):
        print("deleting ...")

class Wrapper:
    def __init__(self, backend=None):
        self.backend = backend

    def __getattr__(self, key):
        # Called only when the attribute is not found on Wrapper itself;
        # delegate the lookup to the wrapped backend object.
        return getattr(self.backend, key)

if __name__ == "__main__":
    test = Test()
    wrapper = Wrapper(backend=test)
    wrapper.get()
    print(wrapper.id)
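Running this prints "getting ..." followed by 5: neither get nor id exists on the Wrapper instance itself, so each lookup falls through to __getattr__ and is forwarded to the wrapped Test object.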

The Wrapper class itself has no id attribute and no get, update, or delete methods, yet a Wrapper instance can access id and call get, delete, and update directly, as if they were its own. OpenStack's db layer uses exactly this pattern. Part of nova.db.api:

from nova.openstack.common.db import api as db_api
_BACKEND_MAPPING = {'sqlalchemy': 'nova.db.sqlalchemy.api'}
IMPL = db_api.DBAPI(backend_mapping=_BACKEND_MAPPING)
def compute_node_get_all(context, no_date_fields=False):
    """Get all computeNodes.

    :param context: The security context
    :param no_date_fields: If set to True, excludes 'created_at', 'updated_at',
                           'deleted_at' and 'deleted' fields from the output,
                           thus significantly reducing its size.
                           Set to False by default

    :returns: List of dictionaries each containing compute node properties,
              including corresponding service and stats
    """
    return IMPL.compute_node_get_all(context, no_date_fields)

As you can see, these functions do nothing but forward the call to IMPL, and IMPL is an instance of DBAPI from nova.openstack.common.db.api. Let's look at its code:

class DBAPI(object):
    def __init__(self, backend_mapping=None):
        if backend_mapping is None:
            backend_mapping = {}
        self.__backend = None
        self.__backend_mapping = backend_mapping

    @lockutils.synchronized('dbapi_backend', 'nova-')
    def __get_backend(self):
        """Get the actual backend.  May be a module or an instance of
        a class.  Doesn't matter to us.  We do this synchronized as it's
        possible multiple greenthreads started very quickly trying to do
        DB calls and eventlet can switch threads before self.__backend gets
        assigned.
        """
        if self.__backend:
            # Another thread assigned it
            return self.__backend
        backend_name = CONF.database.backend
        self.__use_tpool = CONF.database.use_tpool
        if self.__use_tpool:
            from eventlet import tpool
            self.__tpool = tpool
        # Import the untranslated name if we don't have a
        # mapping.
        backend_path = self.__backend_mapping.get(backend_name,
                                                  backend_name)
        backend_mod = importutils.import_module(backend_path)
        self.__backend = backend_mod.get_backend()
        return self.__backend

    def __getattr__(self, key):
        backend = self.__backend or self.__get_backend()
        attr = getattr(backend, key)
        if not self.__use_tpool or not hasattr(attr, '__call__'):
            return attr

        def tpool_wrapper(*args, **kwargs):
            return self.__tpool.execute(attr, *args, **kwargs)

        functools.update_wrapper(tpool_wrapper, attr)
        return tpool_wrapper
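The extra twist in DBAPI.__getattr__ is that it may also wrap the callable it returns (the tpool_wrapper above) before handing it back. A minimal sketch of that idea, using a made-up LoggingProxy in place of the tpool machinery:

import functools

class LoggingProxy:
    def __init__(self, backend):
        self.backend = backend

    def __getattr__(self, key):
        attr = getattr(self.backend, key)
        if not callable(attr):
            return attr

        def wrapper(*args, **kwargs):
            print("calling %s" % key)
            return attr(*args, **kwargs)

        # Preserve the wrapped function's name and docstring,
        # just as functools.update_wrapper does in DBAPI.
        functools.update_wrapper(wrapper, attr)
        return wrapper

# proxy = LoggingProxy(some_object)
# proxy.method(1, 2)   # prints "calling method", then delegates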
 

We might expect DBAPI to define a whole pile of implementation methods, but at first glance it has only one core method, __getattr__, and it inherits directly from object. So where do all those methods come from? The secret lies in __getattr__: what it returns is really an attribute of the backend. Going back to the nova.db.api code, the backend there is nova.db.sqlalchemy.api, so calling a method on DBAPI really ends up calling the corresponding function in nova.db.sqlalchemy.api. Below is part of nova.db.sqlalchemy.api:

@require_admin_context
def compute_node_get_all(context, no_date_fields):

    # NOTE(msdubov): Using lower-level 'select' queries and joining the tables
    #                manually here allows to gain 3x speed-up and to have 5x
    #                less network load / memory usage compared to the sqla ORM.

    engine = get_engine()

    # Retrieve ComputeNode, Service, Stat.
    compute_node = models.ComputeNode.__table__
    service = models.Service.__table__
    stat = models.ComputeNodeStat.__table__

    with engine.begin() as conn:
        redundant_columns = set(['deleted_at', 'created_at', 'updated_at',
                                 'deleted']) if no_date_fields else set([])

        def filter_columns(table):
            return [c for c in table.c if c.name not in redundant_columns]

        compute_node_query = select(filter_columns(compute_node)).\
                                where(compute_node.c.deleted == 0).\
                                order_by(compute_node.c.service_id)
        compute_node_rows = conn.execute(compute_node_query).fetchall()

        service_query = select(filter_columns(service)).\
                            where((service.c.deleted == 0) &
                                  (service.c.binary == 'nova-compute')).\
                            order_by(service.c.id)
        service_rows = conn.execute(service_query).fetchall()

        stat_query = select(filter_columns(stat)).\
                        where(stat.c.deleted == 0).\
                        order_by(stat.c.compute_node_id)
        stat_rows = conn.execute(stat_query).fetchall()

    # NOTE(msdubov): Transferring sqla.RowProxy objects to dicts.
    stats = [dict(proxy.items()) for proxy in stat_rows]

    # Join ComputeNode & Service manually.
    services = {}
    for proxy in service_rows:
        services[proxy['id']] = dict(proxy.items())

    compute_nodes = []
    for proxy in compute_node_rows:
        node = dict(proxy.items())
        node['service'] = services.get(proxy['service_id'])

        compute_nodes.append(node)

    # Join ComputeNode & ComputeNodeStat manually.
    # NOTE(msdubov): ComputeNode and ComputeNodeStat map 1-to-Many.
    #                Running time is (asymptotically) optimal due to the use
    #                of iterators (itertools.groupby() for ComputeNodeStat and
    #                iter() for ComputeNode) - we handle each record only once.
    compute_nodes.sort(key=lambda node: node['id'])
    compute_nodes_iter = iter(compute_nodes)
    for nid, nsts in itertools.groupby(stats, lambda s: s['compute_node_id']):
        for node in compute_nodes_iter:
            if node['id'] == nid:
                node['stats'] = list(nsts)
                break
            else:
                node['stats'] = []

    return compute_nodes

The real implementation lives in nova.db.sqlalchemy.api.
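Stripped of the locking and tpool details, the whole chain can be sketched roughly as follows (SimpleDBAPI and the myapp module names are invented for illustration; this is not the actual nova code):

import importlib

class SimpleDBAPI:
    """Minimal sketch of the DBAPI delegation pattern."""

    def __init__(self, backend_name, backend_mapping=None):
        mapping = backend_mapping or {}
        # Resolve e.g. 'sqlalchemy' to 'myapp.db.sqlalchemy.api'
        path = mapping.get(backend_name, backend_name)
        self._backend = importlib.import_module(path)

    def __getattr__(self, key):
        # Any attribute not defined on SimpleDBAPI is looked up on the backend module
        return getattr(self._backend, key)

# Usage, assuming myapp/db/sqlalchemy/api.py defines compute_node_get_all():
# IMPL = SimpleDBAPI('sqlalchemy', {'sqlalchemy': 'myapp.db.sqlalchemy.api'})
# IMPL.compute_node_get_all(context)  # actually calls the backend function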
