A lot of my time these days goes into writing services that control models or generate reports. In these systems, the control flow moves from the user-facing HTTP server to a task queue (think Celery) that handles the actual work. Depending on the setup, the log output ends up scattered over multiple files and has to be painstakingly pieced together - one reason debugging these kinds of applications becomes tiring quite fast.
From his talk, I gather that Ka-Ping Yee had a similar problem when he created q. This small Python package is now my first import for a quick and dirty debugging session. To use q, import it and call it like a function:
import q as qq # qq is easier to search for
qq("foo", "bar")
When you use q, you may wonder where the output went. The beauty of q is that it writes its output to the file $TMPDIR/q. That way the debugging output is collected in a single file and is not mixed with other output.
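To watch that log while a program runs, it helps to know where the file resolves on the current machine. A quick sketch, assuming only that q uses $TMPDIR and falls back to /tmp when it is unset (the path is computed here without importing q itself):

```python
import os

# Where q's log ends up: $TMPDIR/q, falling back to /tmp/q when TMPDIR
# is unset (assumption based on the path mentioned above).
log_path = os.path.join(os.environ.get("TMPDIR", "/tmp"), "q")
print(log_path)
```

From a second terminal, running tail -f on that path then gives a live view of the debugging output.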
q can also easily be embedded into expressions, removing the need to rewrite the code when debugging. Depending on the chosen operator q prints the full expression or the next expression only:
foo(qq|1 + 2 + 3) # prints 6
foo(qq/1 + 2 + 3) # prints 1
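The trick behind this is plain operator precedence: `|` binds looser than `+`, so the whole sum reaches q, while `/` binds tighter and grabs only its neighbour. A minimal stand-in class (not q's actual internals) shows the mechanism:

```python
class Peek:
    """Stand-in mimicking q's operator trick; not the real q internals."""

    def __or__(self, value):
        # '|' has lower precedence than '+': qq | 1 + 2 + 3 parses as
        # qq | (1 + 2 + 3), so the full sum arrives here.
        print("peek:", value)
        return value

    def __truediv__(self, value):
        # '/' has higher precedence than '+': qq / 1 + 2 + 3 parses as
        # (qq / 1) + 2 + 3, so only the 1 arrives here.
        print("peek:", value)
        return value

qq = Peek()
total = qq | 1 + 2 + 3    # prints "peek: 6"; total == 6
partial = qq / 1 + 2 + 3  # prints "peek: 1"; partial == 6
```

Because both methods return the value unchanged, the expression keeps its original result and can stay embedded in a call like foo(...).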
Another feature of q is its ability to trace function calls. Used as a decorator, it prints all arguments and the return value of a function. With nested function calls it is particularly helpful because it indents the output nicely. Unfortunately, the way q is implemented, the renamed import breaks the @q decorator. However, using the trace function explicitly works just fine:
@qq.trace
def say_hello(name):
    print("hello {}".format(name))
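In spirit, the tracing works like an ordinary logging decorator. A simplified sketch of the idea (this is not q's implementation, which also indents by call depth and writes to the log file instead of stdout):

```python
import functools

def trace(func):
    # Simplified stand-in for qq.trace: log every call and its result.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("call {}{!r}".format(func.__name__, args))
        result = func(*args, **kwargs)
        print("{} -> {!r}".format(func.__name__, result))
        return result
    return wrapper

@trace
def add(a, b):
    return a + b

add(1, 2)  # logs the call, then the return value 3
```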
To sum up, q is an incredibly useful package for debugging, and I highly recommend checking it out.