A Jekyll command-line tool in Python


A simple Jekyll client to manage your posts on your local host.

How to use

It's easy to use once you configure it :)

All the configuration items live in conf/blog.conf and are easy to set. If you change the location of this config file, you need to set the JEKYLL_CONF environment variable as follows:

export JEKYLL_CONF=path/to/yourconf

All your config items should go in the site section, as follows:

base = <the root path of your blog>
posts = <the posts path, default '_posts'>
username = <your username>
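Put together, a minimal conf/blog.conf could look like the following. This is a sketch assuming the standard INI syntax read by Python's configparser; the values are placeholders, not defaults:

```ini
[site]
; root path of your Jekyll blog (placeholder path)
base = /home/alice/myblog
; posts directory, relative to base
posts = _posts
; author name written into new posts
username = alice
```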

Once you finish the steps above, let's go!

Not sure how it works? Run './blog help' to show the help messages:

usage: blog [--version] [--debug] <subcommand> ...

A shell to manage your blog

positional arguments:
    create         Create a new post.
    delete         Delete specified post.
    list           List all the posts.
    ls             Equivalent to list.
    show           Read a post.
    help           Display help about this program or one of its subcommands.

optional arguments:
  --version        show program's version number and exit
  --debug          Print debugging output

See "blog help COMMAND" for help on a specific command.

Just as shown above, you can run blog help COMMAND for help on a specific command.

Want to list all your posts? Just run ./blog list:

+--------------------+----------+------------+
| title              | filetype | date       |
+--------------------+----------+------------+
| scriptnote         | markdown | 2015-04-26 |
| HowToInstallJekyll | markdown | 2015-04-26 |
| java-concurrency   | markdown | 2014-09-20 |
+--------------------+----------+------------+

A pretty table? Thanks to the Python prettytable library.

If you want to show more details about the posts, run ./blog list -d:

+-------------+------------+----------+--------+------+----------+----------+
| title       | date       | category | layout | tags | filetype | comments |
+-------------+------------+----------+--------+------+----------+----------+
| HelloWorld1 | 2015-03-18 | linux    | post   | bash | markdown | true     |
| HelloWrold2 | 2014-11-30 | linux    | post   | c    | markdown | true     |
+-------------+------------+----------+--------+------+----------+----------+

Want to show the content of a specific post? Just run ./blog show -t HelloWorld1.

You can also choose the style used to read the post: cat, less, or more.

Once you get a fresh new idea and want to write it down in your blog, run blog create. This smart script will set your metadata automatically and call your editor according to your EDITOR environment variable.
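For context, the metadata of a Jekyll post is the YAML front matter at the top of the file; a freshly created post might start like this (the field values are placeholders, the fields themselves mirror the columns shown by ./blog list -d):

```yaml
---
layout: post
title: HelloWorld1
date: 2015-03-18
category: linux
tags: [bash]
comments: true
---
```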

Let's record our lives with Jekyll from now on!

Issues & PRs

Yes, welcome!



In Python, accessing a.xx ends up in a.__getattr__('xx') (equivalently getattr(a, 'xx')) when normal attribute lookup fails; and a.xx(), where xx implements the __call__() method, amounts to calling getattr(a, 'xx')().


class Test:
	def __init__(self):
		self.value = 5
	def get(self):
		print("getting ...")
	def update(self):
		print("updating ...")
	def delete(self):
		print("deleting ...")

class Wrapper:
	def __init__(self, backend=None):
		self.backend = backend
	def __getattr__(self, key):
		# Called only when normal lookup fails; delegate to the backend.
		return getattr(self.backend, key)

if __name__ == "__main__":
	test = Test()
	wrapper = Wrapper(backend=test)
	wrapper.get()     # delegated to test.get, prints "getting ..."
	wrapper.delete()  # delegated to test.delete, prints "deleting ..."


from nova.openstack.common.db import api as db_api

_BACKEND_MAPPING = {'sqlalchemy': 'nova.db.sqlalchemy.api'}
IMPL = db_api.DBAPI(backend_mapping=_BACKEND_MAPPING)


def compute_node_get_all(context, no_date_fields=False):
    """Get all computeNodes.

    :param context: The security context
    :param no_date_fields: If set to True, excludes 'created_at', 'updated_at',
                           'deleted_at' and 'deleted' fields from the output,
                           thus significantly reducing its size.
                           Set to False by default

    :returns: List of dictionaries each containing compute node properties,
              including corresponding service and stats
    """
    return IMPL.compute_node_get_all(context, no_date_fields)


# Excerpt from nova/openstack/common/db/api.py; the imports it relies on
# (functools, lockutils, importutils, CONF) are omitted in this snippet.
class DBAPI(object):
    def __init__(self, backend_mapping=None):
        if backend_mapping is None:
            backend_mapping = {}
        self.__backend = None
        self.__backend_mapping = backend_mapping

    @lockutils.synchronized('dbapi_backend', 'nova-')
    def __get_backend(self):
        """Get the actual backend.  May be a module or an instance of
        a class.  Doesn't matter to us.  We do this synchronized as it's
        possible multiple greenthreads started very quickly trying to do
        DB calls and eventlet can switch threads before self.__backend gets
        if self.__backend:
            # Another thread assigned it
            return self.__backend
        backend_name = CONF.database.backend
        self.__use_tpool = CONF.database.use_tpool
        if self.__use_tpool:
            from eventlet import tpool
            self.__tpool = tpool
        # Import the untranslated name if we don't have a
        # mapping.
        backend_path = self.__backend_mapping.get(backend_name,
        backend_mod = importutils.import_module(backend_path)
        self.__backend = backend_mod.get_backend()
        return self.__backend

    def __getattr__(self, key):
        backend = self.__backend or self.__get_backend()
        attr = getattr(backend, key)
        if not self.__use_tpool or not hasattr(attr, '__call__'):
            return attr

        def tpool_wrapper(*args, **kwargs):
            return self.__tpool.execute(attr, *args, **kwargs)

        functools.update_wrapper(tpool_wrapper, attr)
        return tpool_wrapper
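Stripped of the Nova-specific pieces (CONF, lockutils, eventlet's tpool), the lazy-loading delegation above can be sketched as follows. SimpleDBAPI and its method names are illustrative, not part of Nova; the stdlib math module stands in for a real backend module:

```python
import importlib


class SimpleDBAPI:
    """Minimal sketch of DBAPI's pattern: resolve a backend module lazily
    and forward every attribute access to it via __getattr__."""

    def __init__(self, backend_name, backend_mapping=None):
        self._backend_name = backend_name
        self._backend_mapping = backend_mapping or {}
        self._backend = None

    def _get_backend(self):
        # Import the untranslated name if there is no mapping,
        # just like DBAPI does.
        path = self._backend_mapping.get(self._backend_name,
                                         self._backend_name)
        self._backend = importlib.import_module(path)
        return self._backend

    def __getattr__(self, key):
        # Only reached when normal attribute lookup fails, i.e. for
        # everything the backend is expected to provide.
        backend = self._backend or self._get_backend()
        return getattr(backend, key)


# Use the stdlib 'math' module as a stand-in backend.
api = SimpleDBAPI('default', backend_mapping={'default': 'math'})
print(api.sqrt(16.0))  # delegated to math.sqrt, prints 4.0
```

The backend module is imported only on the first delegated attribute access, which is exactly why Nova's version needs the lock: two greenthreads can race through __getattr__ before self.__backend is assigned.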


# Excerpt from nova/db/sqlalchemy/api.py; the imports it relies on
# (itertools, sqlalchemy's select, models, get_engine) are omitted here.
def compute_node_get_all(context, no_date_fields):

    # NOTE(msdubov): Using lower-level 'select' queries and joining the tables
    #                manually here allows to gain 3x speed-up and to have 5x
    #                less network load / memory usage compared to the sqla ORM.

    engine = get_engine()

    # Retrieve ComputeNode, Service, Stat.
    compute_node = models.ComputeNode.__table__
    service = models.Service.__table__
    stat = models.ComputeNodeStat.__table__

    with engine.begin() as conn:
        redundant_columns = set(['deleted_at', 'created_at', 'updated_at',
                                 'deleted']) if no_date_fields else set([])

        def filter_columns(table):
            return [c for c in table.c if c.name not in redundant_columns]

        compute_node_query = select(filter_columns(compute_node)).\
                                where(compute_node.c.deleted == 0).\
                                order_by(compute_node.c.service_id)
        compute_node_rows = conn.execute(compute_node_query).fetchall()

        service_query = select(filter_columns(service)).\
                            where((service.c.deleted == 0) &
                                  (service.c.binary == 'nova-compute')).\
                            order_by(service.c.id)
        service_rows = conn.execute(service_query).fetchall()

        stat_query = select(filter_columns(stat)).\
                         where(stat.c.deleted == 0).\
                         order_by(stat.c.compute_node_id)
        stat_rows = conn.execute(stat_query).fetchall()

    # NOTE(msdubov): Transferring sqla.RowProxy objects to dicts.
    stats = [dict(proxy.items()) for proxy in stat_rows]

    # Join ComputeNode & Service manually.
    services = {}
    for proxy in service_rows:
        services[proxy['id']] = dict(proxy.items())

    compute_nodes = []
    for proxy in compute_node_rows:
        node = dict(proxy.items())
        node['service'] = services.get(proxy['service_id'])
        compute_nodes.append(node)


    # Join ComputeNode & ComputeNodeStat manually.
    # NOTE(msdubov): ComputeNode and ComputeNodeStat map 1-to-Many.
    #                Running time is (asymptotically) optimal due to the use
    #                of iterators (itertools.groupby() for ComputeNodeStat and
    #                iter() for ComputeNode) - we handle each record only once.
    compute_nodes.sort(key=lambda node: node['id'])
    compute_nodes_iter = iter(compute_nodes)
    for nid, nsts in itertools.groupby(stats, lambda s: s['compute_node_id']):
        for node in compute_nodes_iter:
            if node['id'] == nid:
                node['stats'] = list(nsts)
                break
            else:
                node['stats'] = []

    return compute_nodes
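The one-pass join above hinges on itertools.groupby, which only groups *consecutive* records with equal keys; that is why stat_rows must arrive ordered by compute_node_id. A small standalone illustration (the data here is made up, not Nova's):

```python
import itertools

# Rows are sorted by the grouping key; groupby only merges
# consecutive records with equal keys.
stats = [{'compute_node_id': 1, 'key': 'cpu'},
         {'compute_node_id': 1, 'key': 'mem'},
         {'compute_node_id': 2, 'key': 'cpu'}]

grouped = {nid: [s['key'] for s in nsts]
           for nid, nsts in itertools.groupby(
               stats, lambda s: s['compute_node_id'])}
print(grouped)  # {1: ['cpu', 'mem'], 2: ['cpu']}
```

Since each stat group is consumed as the compute-node iterator advances, every record on either side is touched exactly once, which is what the NOTE about asymptotically optimal running time refers to.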