Yes that's right, you're importing 31 new modules just to log a string. This is only the beginning, I could go on and on.
At most Python shops where I've worked, we replace logging.py with a 100-200 line module that outputs strings to stderr at levels configurable per-module. You can pipe that logging output into programs that timestamp each line, or ship it over the network, or send email notifications on errors, etc. There are plenty of good reasons why your program shouldn't concern itself with the delivery of logging messages beyond sending them to stderr.
Unfortunately, this particular battery included is a D, even if you only needed an AAA.
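A minimal sketch of the kind of replacement module described above (the names and level scheme here are made up for illustration, not anyone's actual shop code):

```python
import sys

# Hypothetical minimal logger: per-module levels, output to stderr only.
LEVELS = {"debug": 10, "info": 20, "warning": 30, "error": 40}
_module_levels = {}   # module name -> minimum level
_DEFAULT_LEVEL = 20   # "info"

def set_level(module, level):
    """Configure the minimum level for one module."""
    _module_levels[module] = LEVELS[level]

def log(module, level, msg):
    """Write 'LEVEL module: msg' to stderr if the module's level allows it."""
    if LEVELS[level] >= _module_levels.get(module, _DEFAULT_LEVEL):
        print(f"{level.upper()} {module}: {msg}", file=sys.stderr)
```

Timestamping, shipping, and alerting would then live in whatever consumes stderr, exactly as described.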
the complete opposite of the "do one thing and do it well" philosophy.
This is like saying subprocess is a mess because it does more than just call system(). It does one thing: logging, and it does it well.
There are plenty of good reasons why your program shouldn't concern itself with the delivery of logging messages beyond sending them to stderr.
So to whom, outside my program, am I supposed to give the information about which messages should go to syslog, which to stderr, and which to a remote log server? You're advocating the use of dumb pipes and message parsing for something that can be done perfectly well in-process, where all the state is available.
When I had to optimise some Python code, the logging module was the first module that got replaced. I have no idea what it was doing, but it was causing a 10x or so hit to performance. Writing my own logging.py (with the same API so I didn't have to change any other code) gave me a massive improvement.
Yes that's right, you're importing 31 new modules just to log a string.
Does it really matter? Maybe the use of namespaces and such isn't ideal, but let's be honest, if importing a few modules is going to make someone lose any sleep, Python isn't really the language for them anyway.
At most python shops I've worked, we replace logging.py with a 100-200 line module that outputs strings to stderr at levels configurable per-module.
Well, OK, that's your choice.
In contrast, we have a system built on the standard Python logging package that provides several loggers. Each logger has its own customised formatting, including tidy recording of multi-line log entries if required. Each logger records to (among other things) different files on disk. We're also setting up e-mail alerts and database-backed logging at the moment for some loggers/levels. Since the originating log requests come from multiple web server processes, we co-ordinate everything by running a centralised control process that runs the real handlers, accepting log entries from any of the other processes via a socket, thus avoiding any nasty race conditions with the concurrency.
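This isn't the poster's actual code, but the per-logger formatting piece of such a setup can be sketched with the standard package (StringIO streams stand in here for the separate log files; the cross-process coordination would use logging.handlers.SocketHandler on the sending side):

```python
import io
import logging

# Two loggers, each with its own handler and its own format.
web_stream, task_stream = io.StringIO(), io.StringIO()

web = logging.getLogger("app.web")
handler = logging.StreamHandler(web_stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
web.addHandler(handler)
web.setLevel(logging.INFO)

tasks = logging.getLogger("app.tasks")
handler = logging.StreamHandler(task_stream)
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
tasks.addHandler(handler)
tasks.setLevel(logging.DEBUG)

web.info("request handled")    # goes only to web_stream, with timestamps
tasks.debug("job queued")      # goes only to task_stream, name-prefixed
```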
We use a couple of tricks that aren't available out-of-the-box, but since we're using the standard logging package it was easy to find web pages with working examples of how to do similar things and implement what we needed for our own system.
The code required to record a log entry is still the same one-liner it would be to print something simple to stdout. The total overhead for all of the multi-process, multi-handler, multi-formatted logging system is just over 100 lines.
I can't speak for anyone else, but I doubt I'm a good enough programmer to achieve all of that functionality in 100-200 lines starting from scratch. Even if I were, I doubt I could implement it in only the couple of hours we spent setting up the system I've described here.
Python has its share of dubious included batteries, but IMHO a powerful, flexible logging framework is one of its stronger assets. Sure, it's mildly irritating to type a few lines of boilerplate if you want to use the same system for something that doesn't really need all of that power and flexibility, but even then, it's probably fewer than a dozen lines of code that you're going to type once in the entire lifetime of a project, or that you're going to shove into your own logger module and just import every time you start a new quick-and-dirty project.
the trouble is that the alternative is a pile of different solutions. it gets worse when 3rd party libraries dream up their own approaches. if everyone used the standard life would be much easier.
the advantages of a consistent log interface across an entire system outweigh the poor implementation and irritable maintainer.
i guess what we really need is a new, better implementation of the same api...
if you take all the python libraries and multiply them by the five or six lines of completely custom, non-standard, everyone-does-it-differently logging code you're encouraging everyone to write, then we have way, way more than 100-200 lines of logging code. Except unlike logging.py's lines of code, none of it works together.
I personally prefer Logbook [1]. It's written by mitsuhiko (author of Werkzeug, Flask, and lots more), and makes logging very easy. The whole idea of using with statements to manage handlers is a bit tricky to wrap one's head around, but I find it much nicer to use.
The boilerplate required to enable logging is just too much. This is why people rarely do it, and then we end up with a million different logging styles.
I think all the complaints are overblown... maybe it could be modularized some more, but of all things, I've never had any problems with logging. All the advanced features are there for a reason and work well.
logging.basicConfig() does it in one line, but usually the config file system is used for larger apps. that system is essential so that the logging output of different sub-components can be individually configured and piped to different streams.
If you want lots of detail (which, eventually, you should), then those absolutely do a great job. But if you want a quick-and-dirty intro just to get down to business, then this sort of post does a great job. I mean, clearly it's benefited some folks, so it serves some good purpose, right? :)
What's wrong is giving slightly bad suggestions, like keeping a global 'logger' variable. There's absolutely no need to do that; just use logging.getLogger(__name__), for example.
I usually have something like self.logger = logging.getLogger(__name__) in my classes' __init__() within project/package (sub)modules.
So as a library author, how do I effectively use logging? Should I? Is there a way to embed logging in my library, turn it off by default, and allow end users to turn it back on when required?
If you have subsystems where different abstractions or granularities might be useful, declare a logger for each of those. Document them clearly. Make the top-level the name of your library. "libname.foo", "libname.bar", "libname.baz" are fine. Just use only one top-level namespace.
Use the logging levels appropriately. (Some libraries log everything as "warning", that's almost as bad as no logging.)
"Turn it off by default" - unless you are making a lib where performance is critical, logging overhead is noise. Your containing process will set up handlers as appropriate - they can turn off your output if they know what your loggers are.
For bonus points, provide logging configurations for "silent", "normal", and "debug".
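The library side of this advice boils down to NullHandler plus a single namespace; a sketch ("libname" is a placeholder, and the application side is shown with a StringIO for testability):

```python
import io
import logging

# Library side: one top-level namespace, NullHandler so silence is the default.
liblog = logging.getLogger("libname")
liblog.addHandler(logging.NullHandler())
foolog = logging.getLogger("libname.foo")   # per-subsystem child logger

foolog.warning("nobody sees this yet")      # silent: only the NullHandler runs

# Application side: opt back in by attaching a real handler to the namespace.
stream = io.StringIO()
liblog.addHandler(logging.StreamHandler(stream))
foolog.warning("now it shows up")
```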
I agree that the logging stdlib is a bit heavy for many uses. If your goal is to "just print a string", it can be a maddening array. OK:
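The long-hand boilerplate being complained about looks something like this (a StringIO stands in for sys.stderr so the snippet is self-contained):

```python
import io
import logging

stream = io.StringIO()  # stand-in for sys.stderr
logger = logging.getLogger("quick")
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("hello")   # versus: print("hello", file=sys.stderr)
```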
Actually, you don't configure it at all in your library. Just import logging, then call logging.{debug,info,whatever} as needed. The idiom is to do all configuration in the app which uses your libraries, such as looking for "-d" or "-dd" in sys.argv to set the logging level.
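A sketch of that app-side idiom (the -d/-dd mapping is one common convention, not a standard):

```python
import logging
import sys

def level_from_argv(argv):
    """Map -dd -> DEBUG, -d -> INFO, default -> WARNING."""
    if "-dd" in argv:
        return logging.DEBUG
    if "-d" in argv:
        return logging.INFO
    return logging.WARNING

# The application, not the library, does the one-time configuration.
logging.basicConfig(level=level_from_argv(sys.argv))
```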
The author of the OP has written two books for teaching children (aged 7-10 or so) to code by having them type in or copy completed code, so this tutorial just continues in his typical style.
I usually write a simple deb(msg) function in each module, in which I either print or use logging, depending on my needs. The good advice here is not "use logging"; it is "avoid prints".
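One way such a deb() might look (the toggle and names are invented; the point is that call sites never change when the backend does):

```python
import logging
import sys

_USE_LOGGING = False  # flip to route through the logging package instead

def deb(msg):
    """Per-module debug output; swap the body without touching callers."""
    if _USE_LOGGING:
        logging.getLogger(__name__).debug(msg)
    else:
        print(msg, file=sys.stderr)
```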
I had a LOT of headaches trying to make a POSIX-compliant script with that. Output to stdout, errors to stderr, nothing too esoteric. The docs even have a section devoted to exactly that, yet it was the most unpythonic time I've spent with the language... and I've done a lot of string manipulation :)
And ok, I'm dumb. But hundreds of others struggling with the same thing at stackexchange might mean something.
This page http://wiki.python.org/moin/LoggingPackage gave me a lot of indirect insight into why the logging module is the way it is. The user "VinaySajip" is the author and maintainer of the package.
He says he's receptive[0] but then dismisses every con listed with either, "Suggest an alternative,"[1] or, "Give me some form of evidence that I won't dismiss."[2] As another user said,[3] there's just far too much going on when it should default to stderr and let the dev worry about delivery.
Consider:
Yes that's right, you're importing 31 new modules just to log a string. This is only the beginning, I could go on and on.

At most python shops I've worked, we replace logging.py with a 100-200 line module that outputs strings to stderr at levels configurable per-module. You can pipe that logging output into programs that timestamp each line, or ship them over the network, or send email notifications on errors, etc. There are plenty of good reasons why your program shouldn't concern itself with the delivery of logging messages beyond sending them to stderr.
Unfortunately, this particular battery included is a D, even if you only needed an AAA.