The best reproducibility tool is to make a log of your actions, something like this:
experiment/input ; expected ; observation/output ; current hypothesis (supported or rejected)
exp1 ; expected1 ; obs1 ; some fancy hypothesis, supported
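If your experiments run from Python, even this part can be automated with a few lines of standard library code. A minimal sketch (the file name and field layout are my own choices, not a standard):

```python
import csv
from datetime import datetime, timezone

def log_experiment(path, experiment, expected, observed, hypothesis):
    """Append one experiment record to a semicolon-separated log file."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),  # when the run happened
            experiment, expected, observed, hypothesis,
        ])

# Usage:
log_experiment("lab_notebook.csv", "exp1", "expected1", "obs1",
               "some fancy hypothesis, supported")
```

The timestamp column is the one thing paper notebooks get for free that files don't, so it's worth adding from the start.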
This can be written down on paper, but if your experiments fit in a computational framework, you can use computational tools to partly or completely automate that logging process (in particular by helping you track the input datasets, which can be huge, and the output figures).
A great reproducibility tool for Python with a low learning curve is of course IPython/Jupyter Notebook (don't forget the %logon and %logstart magics). Tip: to make sure your notebook is reproducible, restart the kernel and run all cells from top to bottom (the Run All Cells button). If it works, save everything in an archive file ("freezing"); if not, notably if you need to run cells in a non-linear, non-sequential, non-obvious order to avoid errors, you need to rework the notebook a bit.
Another great tool, quite recent (2015), is recipy, which is much like Sumatra (see below) but made specifically for Python. I don't know whether it works with Jupyter notebooks, but I know the author uses them frequently, so I guess that if they are not currently supported, they will be in the future.
Git is also awesome, and it's not tied to Python. It will help you not only keep a history of all your experiments, code, datasets, figures, etc., but also provide you with tools to maintain (git pickaxe), collaborate (git blame) and debug (git bisect) using a scientific method of debugging called delta debugging. Here's a story of a fictional researcher trying to build his own experiment-logging system, which ends up being a facsimile of Git.
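To see why git bisect is so useful for experiments: it performs a binary search over your commit history to find the first commit that broke a result. A sketch of the same delta-debugging idea in plain Python (the commit list and the is_bad predicate here are hypothetical; git bisect does this over real commits and a test you run at each step):

```python
def find_first_bad(commits, is_bad):
    """Binary search for the first 'bad' commit, as `git bisect` does.

    Assumes commits are ordered oldest-to-newest and that once a commit
    is bad, every later commit is bad too (the bisect precondition).
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid + 1      # first bad commit is strictly later
    return commits[lo]

# Usage: commits c0..c9, regression introduced at c6.
commits = [f"c{i}" for i in range(10)]
print(find_first_bad(commits, lambda c: int(c[1:]) >= 6))  # prints c6
```

The payoff is logarithmic: 1000 commits between "the experiment worked" and "it doesn't anymore" take about 10 checks, not 1000.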
Another general tool that works with any language (and has a Python API on PyPI) is Sumatra, which is specifically designed to help you do replicable research. (Replicability aims to produce the same results given the exact same code and software, whereas reproducibility aims to produce the same results by any means, which is a lot harder, more time-consuming, and not automatable.)
Here is how Sumatra works: for each experiment that you conduct through Sumatra, the software will act like the "save game state" feature often found in video games. More precisely, it will save:
- all the parameters you provided;
- the exact sourcecode state of your whole experimental application and config files;
- the output/plots/results and also any file produced by your experimental application.
It will then build a database with the timestamp and other metadata for each of your experiments, which you can later browse using the web GUI. Since Sumatra saved the full state of your application for a specific experiment at one specific point in time, you can restore the code that produced a specific result at any moment, so you get replicable research at a low cost (except for storage if you work on huge datasets, but you can configure exceptions if you don't want to save everything every time).
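The mechanism itself is simple enough to sketch by hand. This is not Sumatra's API, just an illustration of what one "save game state" record can contain (the function name, file names and JSON layout are my own; Sumatra also stores things like the VCS revision and dependency versions):

```python
import hashlib
import json
import pathlib
import time

def record_run(params, code_files, output_files, log="runs.json"):
    """Append a Sumatra-style record: parameters, code hashes, output hashes."""
    def digest(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "parameters": params,                             # what you asked for
        "code": {f: digest(f) for f in code_files},       # which code ran
        "outputs": {f: digest(f) for f in output_files},  # what it produced
    }
    log_path = pathlib.Path(log)
    records = json.loads(log_path.read_text()) if log_path.exists() else []
    records.append(entry)
    log_path.write_text(json.dumps(records, indent=2))
    return entry
```

A later hash mismatch on a code file immediately tells you that the code on disk is no longer the code that produced a given result, which is exactly the question Sumatra's database answers for you.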
Another awesome tool is GNOME's Zeitgeist (previously coded in Python but now ported to Vala), an all-encompassing action-journaling system which records everything you do and can use machine learning to summarize, for any time period you want, the relationships between items based on similarity and usage patterns, e.g. answering questions like "What was most relevant to me, while I was working on project X, for a month last year?". Interestingly, Zim Desktop Wiki, a note-taking app similar to Evernote, has a plugin to work with Zeitgeist.
In the end, you can use Git or Sumatra or any other software you want; they will provide you with about the same replicability power, but Sumatra is specifically tailored for scientific research, so it provides a few fancy tools like a web GUI to browse your results, while Git is more tailored towards code maintenance (though it has debugging tools like git bisect, so if your experiments involve code, it may actually be better). Or, of course, you can use both!
/EDIT: dsign touched on a very important point here: the replicability of your setup is as important as the replicability of your application. In other words, you should at least provide a full list of the libraries and compilers you used, along with their exact versions and the details of your platform.
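Python's standard library can capture most of this automatically (the function name is mine; importlib.metadata requires Python 3.8+, and on older versions `pip freeze` gives you the same package list):

```python
import platform
import sys
from importlib.metadata import distributions

def environment_report():
    """Collect platform details and exact installed-package versions."""
    packages = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    )
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "machine": platform.machine(),
        "packages": packages,  # same `name==version` format as `pip freeze`
    }
```

Dumping this dictionary as JSON next to each experiment's results costs nothing and answers the "which versions did you use?" question forever.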
Personally, in scientific computing with Python, I have found that packaging an application along with its libraries is just too painful, so I now use an all-in-one scientific Python distribution such as Anaconda (with the great package manager conda) and simply advise users to use the same distribution. Another solution could be to provide a script that automatically generates a virtualenv, or to package everything using Docker, as cited by dsign, or the open-source Vagrant (for example pylearn2-in-a-box, which uses Vagrant to produce an easily redistributable virtual-environment package).
Finally, to really ensure that you have a fully working environment every time you need it, you can create a virtual machine (see VirtualBox), and you can even save the machine's state (a snapshot) with your experiment ready to run inside. Then you can share this virtual machine, with everything included, so that anyone can replicate your experiment with your exact setup. This is probably the best way to replicate a software-based experiment. Containers might be a more lightweight alternative, but since they do not include the whole environment, the replication fidelity will be less robust.
/EDIT2: Here's a great video summarizing (for debugging, but it can also be applied to research) what is fundamental to doing reproducible research: logging your experiments and every other step of the scientific method, a sort of "explicit experimenting".