HM0880 + python   176

code golf - Put together a Senate majority - Programming Puzzles & Code Golf Stack Exchange
How I generated this
Previous versions were generated by simulated annealing, but now I’ve switched strategies to late acceptance hill-climbing. I’m not sure whether it’s fundamentally better or I just got luckier this time, but I do appreciate that LAHC has fewer parameters to tweak to get good results.

from __future__ import print_function
import random
import zlib

try: range = xrange
except NameError: pass

senators = b'Alexander, Baldwin, Barrasso, Bennet, Blumenthal, Blunt, Booker, Boozman, Brown, Burr, Cantwell, Capito, Cardin, Carper, Casey, Cassidy, Cochran, Collins, Coons, Corker, Cornyn, Cortez Masto, Cotton, Crapo, Cruz, Daines, Donnelly, Duckworth, Durbin, Enzi, Ernst, Feinstein, Fischer, Flake, Franken, Gardner, Gillibrand, Graham, Grassley, Harris, Hassan, Hatch, Heinrich, Heitkamp, Heller, Hirono, Hoeven, Inhofe, Isakson, Johnson, Kaine, Kennedy, King, Klobuchar, Lankford, Leahy, Lee, Manchin, Markey, McCain, McCaskill, McConnell, Menendez, Merkley, Moran, Murkowski, Murphy, Murray, Nelson, Paul, Perdue, Peters, Portman, Reed, Risch, Roberts, Rounds, Rubio, Sanders, Sasse, Schatz, Schumer, Scott, Shaheen, Shelby, Stabenow, Strange, Sullivan, Tester, Thune, Tillis, Toomey, Udall, Van Hollen, Warner, Warren, Whitehouse, Wicker, Wyden, Young'.split(b', ')
assert len(senators) == 100

score = 9999
best = score
recent = 65536 * [score]
recent_index = 0

while True:
    old_score = score
    step = random.randrange(2)
    if step == 0:
        # Swap two senators; at least one must be inside the chosen 51.
        i, j = 100, 100
        while i >= 51 and j >= 51:
            i, j = random.sample(range(100), 2)
        senators[i], senators[j] = senators[j], senators[i]
    else:
        # Rotate a slice that overlaps the chosen 51.
        i, j, k = 100, 100, 100
        while i >= 51:
            i, j, k = sorted(random.sample(range(101), 3))
        senators[i:k] = senators[j:k] + senators[i:j]

    bound = max(old_score, recent[recent_index])

    # Score = size in bits of the raw DEFLATE stream of the first 51 names.
    z = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS, 9)
    sin = b''.join(senators[:51])
    sout = bytearray(z.compress(sin) + z.flush())
    score = len(sout) * 8
    if score - 7 <= bound:
        # Discount unused trailing bits: flip them one at a time and keep
        # the discount as long as decompression still round-trips.
        try:
            for bit in range(7):
                sout[-1] ^= 128 >> bit
                if zlib.decompress(bytes(sout), -zlib.MAX_WBITS) != sin:
                    break
                score -= 1
        except zlib.error:
            pass

    if score > bound:
        # Late acceptance rejects this move: undo it.
        if step == 0:
            senators[i], senators[j] = senators[j], senators[i]
        else:
            j = i + k - j
            senators[i:k] = senators[j:k] + senators[i:j]
        score = old_score
    elif score < recent[recent_index]:
        recent[recent_index] = score
        if score < best:
            best = score
            print(score, b''.join(senators[:51]).decode())
    recent_index = (recent_index + 1) % len(recent)
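The scoring step can be exercised on its own. This is a standalone sketch, not part of the answer; the sample list is illustrative, not an optimized ordering:

```python
import zlib

def deflate_bits(names):
    # Compress the concatenated names as a raw DEFLATE stream
    # (negative wbits = no zlib header/trailer), compression level 9.
    z = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS, 9)
    data = b''.join(names)
    out = z.compress(data) + z.flush()
    # Sanity check: the raw stream must round-trip.
    assert zlib.decompress(out, -zlib.MAX_WBITS) == data
    return len(out) * 8

sample = [b'Murphy', b'Murray', b'Murkowski']
print(deflate_bits(sample))  # shared prefixes help DEFLATE
```

Names that share long prefixes (Murphy/Murray/Murkowski) are exactly what the hill climber tries to group into the 51-name window.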
codegolf  algorithms  python 
january 2018 by HM0880
Step 1: Load data, look around

Step 2: Data preprocessing

Step 3: Data to vectors

Step 4: Training a model, detecting spam

Step 5: How to run experiments?

Step 6: How to tune parameters?

Step 7: Productionalizing a predictor
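A minimal end-to-end sketch of steps 2 through 4 (this is an invented toy example with a bag-of-words Naive-Bayes-style classifier, not the article's actual code or data):

```python
import math
from collections import Counter

def tokenize(text):
    # Step 2: normalize case and split into word tokens.
    return text.lower().split()

def train(labeled_docs):
    # Step 3/4: per-class word counts stand in for feature vectors.
    counts = {'spam': Counter(), 'ham': Counter()}
    for text, label in labeled_docs:
        counts[label].update(tokenize(text))
    return counts

def predict(counts, text):
    # Laplace-smoothed log-probability per class; highest score wins.
    def score(label):
        total = sum(counts[label].values()) + 1
        return sum(math.log((counts[label][w] + 1) / total)
                   for w in tokenize(text))
    return max(counts, key=score)

docs = [("win money now", "spam"), ("cheap money offer", "spam"),
        ("meeting at noon", "ham"), ("lunch at noon?", "ham")]
model = train(docs)
print(predict(model, "win cheap money"))  # spam
print(predict(model, "noon meeting"))     # ham
```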
python  datascience 
december 2017 by HM0880
Python Graph Library - Stack Overflow
There are two excellent choices: NetworkX and igraph.

I like NetworkX, but I read good things about igraph as well. I routinely use NetworkX with graphs with 1 million nodes with no problem (it's about double the overhead of a dict of size V + E).

If you want a feature comparison, see this from the Networkx-discuss list

Feature comparison thread

shareimprove this answer
edited Dec 9 '14 at 14:14

John Y
answered Mar 3 '09 at 15:33

Gregg Lind
In particular, what I like about Networkx.... it's mostly in python, easy to edit and understand the source code, and it feels mostly "pythonic". – Gregg Lind Mar 3 '09 at 15:36
I was wondering, have you used it with a* or similar algorithms? – dassouki Feb 11 '10 at 18:37
I just evaluated both. networkx is installable via pip, whereas igraph is not. This makes igraph harder to use as dependencies in your files. – exhuma Aug 10 '12 at 7:46
As an update for 2013, I'm going with networkx just b/c it has a github and looks most up to date of all the options in this answer and the others – mtpain Feb 20 '13 at 17:16
@GreggLind I am using Networkx but I can see in my profiler that getting edges from large graph consumes a lot of time. Are there any guidelines or some documentation for better performance? It would be really helpful. – Naman Nov 17 '14 at 8:32
python  graphtheory 
december 2017 by HM0880
Python Advanced: Graph Theory and Graphs in Python
Graphs in Python
Origins of Graph Theory
7 bridges of Koenigsberg Before we start with the actual implementations of graphs in Python and before we start with the introduction of Python modules dealing with graphs, we want to devote ourselves to the origins of graph theory.
The origins take us back in time to the Königsberg of the 18th century. Königsberg was a city in Prussia at that time. The river Pregel flowed through the town, creating two islands. The city and the islands were connected by seven bridges. The inhabitants of the city were moved by the question of whether it was possible to take a walk through the town, visiting each area of the town and crossing each bridge only once. Every bridge must have been crossed completely, i.e. it is not allowed to walk halfway onto a bridge and then turn around and later cross the other half from the other side. The walk need not start and end at the same spot. Leonhard Euler solved the problem in 1735 by proving that it is not possible. He found out that the choice of a route inside each land area is irrelevant and that the only thing which mattered is the order (or the sequence) in which the bridges are crossed. He had formulated an abstraction of the problem, eliminating unnecessary facts and focussing on the land areas and the bridges connecting them. In this way, he created the foundations of graph theory. If we see a "land area" as a vertex and each bridge as an edge, we have "reduced" the problem to a graph.

Introduction into Graph Theory Using Python
Simple graph with an isolated node. Before we start our treatise on possible Python representations of graphs, we want to present some general definitions of graphs and their components.
A "graph" in mathematics and computer science consists of "nodes", also known as "vertices". Nodes may or may not be connected with one another. In our illustration, which is a pictorial representation of a graph, the node "a" is connected with the node "c", but "a" is not connected with "b". The connecting line between two nodes is called an edge. If the edges between the nodes are undirected, the graph is called an undirected graph. If an edge is directed from one vertex (node) to another, the graph is called a directed graph. A directed edge is called an arc.
Though graphs may look very theoretical, many practical problems can be represented by graphs. They are often used to model problems or situations in physics, biology, psychology and above all in computer science. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, and so on.
In the latter cases, they are used to represent the data organisation, like the file system of an operating system, or communication networks. The link structure of websites can be seen as a graph as well, i.e. a directed graph, because a link is a directed edge or an arc.
Python has no built-in data type or class for graphs, but it is easy to implement them in Python. One data type is ideal for representing graphs in Python, i.e. dictionaries. The graph in our illustration can be implemented in the following way:
graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : []
python  graphtheory 
december 2017 by HM0880
Introduction to Deep Learning with Python - YouTube

Alec Radford, Head of Research at indico Data Solutions, speaking on deep learning with Python and the Theano library. The emphasis of the talk is on high performance computing, natural language processing using recurrent neural nets, and large scale learning with GPUs.

SlideShare presentation is available here:
Code is here:

indico is building the steel mill for the next industrial revolution. We are making productivity tools for data scientists at small and medium businesses by uniquely automating parts of their workflow.

Like Adobe bringing the creative suite to desktop publishing, making every designer a web developer. indico is bringing tools and workflow to machine learning, making every programmer a 10x data scientist.

Learn more at
machinelearing  python  resources 
december 2017 by HM0880
Python Fractal Landscape : Python
I watched the Nova documentary "Fractals - Hunting the Hidden Dimension" on Youtube, and was interested in the story of a computer graphics engineer at Boeing, Loren Carpenter, who created the first computer-generated mountain landscapes. He did this after reading Benoit Mandelbrot's book about fractals, and created the landscapes by dividing them into progressively smaller triangles. I tried out the idea with a small Python script.

To set up an easy coordinate system, I used squares split diagonally into right-angle triangles, which could be split into smaller right-angle triangles with random elevations. That's somewhat imperfect but is enough to test the idea. I used the "Mayavi" Python library for 3D visualisation (since it was in the Ubuntu repository) to display the result.

I thought this group might be interested in the result.

My test had just 20 lines of Python.

Code on Pastebin

Code on Github

The image shown was generated with "random.seed(6)" added. Rotation, zoom and background colour were set interactively in the Mayavi visualizer.

The Python code was dashed out as a quick test, so the method of finding the correct corner points in the correct order is not very readable. Sorry about that. After writing this test, I Googled for the algorithms usually used in fractal landscape generation and found out about the fractal terrain "diamond square algorithm", so if you want to know more about this subject, then I suggest you do the same.
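The subdivide-and-perturb idea is easiest to see in one dimension. This is a 1-D midpoint-displacement sketch of my own, not the author's Mayavi script:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    # Recursively insert a displaced midpoint between each pair of
    # heights, halving the displacement range at every level.
    rng = rng or random.Random(6)  # fixed seed, like the post's random.seed(6)
    heights = [left, right]
    scale = 1.0
    for _ in range(depth):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            nxt += [a, mid]
        nxt.append(heights[-1])
        heights = nxt
        scale *= roughness
    return heights

profile = midpoint_displacement(0.0, 0.0, depth=5)
print(len(profile))  # 2**5 + 1 = 33 height samples
```

The 2-D diamond-square algorithm mentioned above applies the same halving displacement to a grid instead of a line.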

Edit: Here's two more landscapes on Imgur
python  reddit 
november 2017 by HM0880
Python3 Tutorial: Magic Methods

The so-called magic methods have nothing to do with wizardry. You have already seen them in previous chapters of our tutorial. They are the methods with this clumsy syntax, i.e. the double underscores at the beginning and the end. They are also hard to talk about. How do you pronounce or say a method name like __init__? "Underscore underscore init underscore underscore" sounds horrible and is nearly a tongue twister. "Double underscore init double underscore" is a lot better, but the ideal way is "dunder init dunder". That's why magic methods are sometimes called dunder methods!

So what's magic about the __init__ method? The answer is, you don't have to invoke it directly. The invocation is realized behind the scenes. When you create an instance x of a class A with the statement "x = A()", Python will do the necessary calls to __new__ and __init__.

We have encountered the concept of operator overloading many times in the course of this tutorial. We had used the plus sign to add numerical values, to concatenate strings or to combine lists:

>>> 4 + 5
9
>>> 3.8 + 9
12.8
>>> "Peter" + " " + "Pan"
'Peter Pan'
>>> [3,6,8] + [7,11,13]
[3, 6, 8, 7, 11, 13]
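The same mechanism lets the plus sign work on your own classes: Python translates `x + y` into a call to `x.__add__(y)` behind the scenes. A minimal sketch (the Length class is invented for illustration):

```python
class Length:
    """A length in metres that supports + via the __add__ dunder method."""
    def __init__(self, metres):
        self.metres = metres
    def __add__(self, other):
        # Called behind the scenes for `self + other`.
        return Length(self.metres + other.metres)
    def __repr__(self):
        return "Length(%r)" % self.metres

print(Length(2) + Length(3))  # Length(5)
```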
november 2017 by HM0880
Senior Python Programmers, what tricks do you want to impart to us young guns? : Python
Random braindump

Use Python 3.6, there are some significant improvements over 2.7 - like enums and fstrings (I would switch to 3.6 just for fstrings, TBH)
Calling open() or .close() directly is often a code smell - you probably should be using a with block
Use virtualenv for every project - don't install python packages at the system level. This keeps your project environment isolated and reproducible
Use the csv module for CSVs (you'd be surprised...)
Don't nest comprehensions, it makes your code hard to read (this one from the Google style guide, IIRC)
If you need a counter along with the items from the thing you're looping over, use enumerate(items)
If you're using an IDE (as a Vim user I say you're crazy if you're not using Pycharm with Ideavim) take the time to learn its features. Especially how to use the debugger, set breakpoints, and step through code
multiprocessing, not threading
Developing with a REPL like ipython or Jupyter alongside your IDE can be very productive. I am often jumping back and forth between them. Writing pure functions makes them easy to test / develop / use in the REPL. ipython and Jupyter have helpful magics like %time and %prun for profiling
Use destructuring assignment, not indices, for multiple assignment: first, second, *_ = (1, 2, 3, 4)
Avoid *args or **kwargs unless you know you need them - it makes your function signatures hard to read, and code-completion less helpful
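Several of those tips combined into one runnable sketch (the CSV data is made up):

```python
import csv
import io

# A with block closes the file even on error; here it's an in-memory file.
with io.StringIO("name,score\nada,3\ngrace,5\n") as f:
    rows = list(csv.DictReader(f))   # use the csv module for CSVs

# enumerate() gives you a counter alongside the items you loop over.
for i, row in enumerate(rows, start=1):
    print(i, row["name"])

# Destructuring assignment instead of indexing into a tuple.
first, second, *rest = (1, 2, 3, 4)
print(first, second, rest)  # 1 2 [3, 4]
```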
python  reddit 
november 2017 by HM0880
Python Tools for Visual Studio
PTVS is a free, open source plugin that turns Visual Studio into a Python IDE.

It supports CPython, IronPython, editing, browsing, IntelliSense, mixed Python/C++ debugging, remote Linux/MacOS debugging, profiling, IPython, and web development with Django and other frameworks.

From the Visual Studio 2017 installer, select the Python or Data Science workload to add Python support to Visual Studio.

Need help? You can ask questions, file bugs or request features on our issue tracker on GitHub. Our documentation can be found here.

Designed, developed, and supported by Microsoft and the community.
python  visualstudio 
november 2017 by HM0880
GitHub - Microsoft/vscode-python: Python extension for Visual Studio Code
Python extension for Visual Studio Code

A Visual Studio Code extension with rich support for the Python language (including Python 3.6), with features including the following and more:

Linting (Prospector, Pylint, pycodestyle, Flake8, pylama, pydocstyle, mypy with config files and plugins)
Intellisense (autocompletion with support for PEP 484 and PEP 526)
Auto indenting
Code formatting (autopep8, yapf, with config files)
Code refactoring (Rename, Extract Variable, Extract Method, Sort Imports)
Viewing references, code navigation, view signature
Excellent debugging support (remote debugging over SSH, multiple threads, Django, Flask)
Running and debugging Unit tests (unittest, pytest, nose, with config files)
Execute file or code in a python terminal
Local help file (offline documentation)
Quick Start

Install the extension
optionally install ctags for Workspace Symbols, from here, or using brew install ctags on macOS
Select your Python interpreter
If it's already in your path then you're set
Otherwise, to select a different Python Interpreter/Version (or use a virtual environment), use the command Select Workspace Interpreter
vscode  python 
november 2017 by HM0880
Improving Python and Spark Performance and Interoperability | Schedule | Spark Summit East 2017

Wes McKinney (Two Sigma Investments)
Thursday, February 9
11:40 AM – 12:10 PM
Ballroom B

Apache Spark has become a popular and successful way for Python programmers to parallelize and scale up their data processing. In many use cases, though, a PySpark job can perform worse than an equivalent job written in Scala. It is also costly to push and pull data between the user’s Python environment and the Spark master. In this talk, we’ll examine some of the data serialization and other interoperability issues, especially with Python libraries like pandas and NumPy, that are impacting PySpark performance, and work that is being done to address them. This relates closely to other work in binary columnar serialization and data exchange tools in development such as Apache Arrow and Feather files.
python  talks 
october 2017 by HM0880
Running Jupyter notebooks on GPU on AWS: a starter guide
Running Jupyter notebooks on GPU on AWS: a starter guide
Tue 21 March 2017
By Francois Chollet
In Tutorials.
This is a step by step guide to start running deep learning Jupyter notebooks on an AWS GPU instance, while editing the notebooks from anywhere, in your browser. This is the perfect setup for deep learning research if you do not have a GPU on your local machine.

What are Jupyter notebooks? Why run Jupyter notebooks on AWS GPUs?

A Jupyter notebook is a web app that allows you to write and annotate Python code interactively. It's a great way to experiment, do research, and share what you are working on. Here's what a notebook looks like.

A lot of deep learning applications are very computationally intensive, and would take hours or even days when running on a laptop's CPU cores. Running on GPU can speed up training and inference by a considerable factor (often 5x to 10x, when going from a modern CPU to a single modern GPU). However, you may not have access to a GPU on your local machine. Running Jupyter notebooks on AWS gives you the same experience as running on your local machine, while allowing you to leverage one or several GPUs on AWS. And you only pay for what you use, which can compare favorably versus investing in your own GPU(s) if you only use deep learning occasionally.

Why would I not want to use Jupyter on AWS for deep learning?

AWS GPU instances can quickly become expensive. The one we suggest using costs $0.90 per hour. This is fine for occasional use, but if you are going to run experiments for several hours per day every day, then you are better off building your own deep learning machine, featuring a Titan X or GTX 1080 Ti.

Before you start


You will need an active AWS account.
Some familiarity with AWS EC2 will help, but isn't mandatory.
It will take 5 to 10 minutes to get set up.
jupyter  python  AWS 
october 2017 by HM0880
Install Python 3 on Chromebook
Install Python 3 on Chromebook
Jul 21, 2017

To install Python on a Chromebook we need to enter developer mode. Unfortunately the exact instructions for how to do this vary by device. Here is a link to all Chromebook versions (scroll to the bottom of the page) and their official documentation which will give you the specific instructions.

Download Anaconda
Anaconda is an open source package that will let us run Python. Go to the website and download the Linux version.

Make sure to click the button for the 64-BIT INSTALLER so we can install Python 3.6. This might take a while to download depending on your internet connection.

Note: If you have a 32-bit Chromebook, such as the ASUS C201 with Rockchip, download the 32-bit version of Anaconda.

Once the download has completed, check your Downloads folder to make sure it is there. You should see a folder named “”.

Now open up a Terminal window by pressing Control + Alt + T (all 3 keys at the same time). You’ll see a black screen that says “Welcome to crosh, the Chrome OS developer shell.”

Type the following commands and hit ENTER to execute them:

crosh> shell
chronos@localhost / $ sudo chmod 777 /usr/local
chronos@localhost / $ cd ~/Downloads
chronos@localhost ~/Downloads $ ls
chronos@localhost ~/Downloads $ bash
You will need to hit ENTER twice and scroll down using your arrow key to read the Anaconda license. At the bottom you’ll see the following prompt:

Do you approve the license terms? [yes|no]
>>> yes
Type yes. Next it will ask if we want to install Anaconda into the location /home/chronos/user/anaconda3. We do not. So type CTRL + C to abort this installation and instead, in the terminal, enter the location /usr/local/conda3 followed by the ENTER key:

[/home/chronos/user/anaconda3] >>> /usr/local/conda3
It takes a while for this to install. You’ll see a stream of text in your Terminal until finally you get the following prompt:

Do you wish the installer to prepend the Anaconda3 install location to PATH in your /home/chronos/user/.bashrc ? [yes|no]
We want to type yes here and hit Return:

[no] >>> yes
You should see a confirmation message that includes Thank you for installing Anaconda3!

Run Python in a new shell
Before Anaconda will work you need to exit the terminal shell by closing your current tab. Next open up a new terminal shell by pressing Control + Alt + T and then typing shell from the command prompt:

crosh> shell
chronos@localhost / $ python --version
Python 3.6.2 :: Anaconda 4.3.1 (64-bit)
And now Python is installed!

Next Steps
If you want to install additional packages, you can use the conda command to do so. Here’s a link to the official conda instructions.

Check out Django for Beginners, a free online book on how to create and deploy multiple Django applications. Starting with a simple “Hello, World” application it progresses through multiple web applications of increasing complexity showing Django best practices along the way.
chromebooks  python 
october 2017 by HM0880
How to create a single file of Sphinx based documentation | G-Loaded Journal
I used the sphinx-build utility directly to generate a single HTML document containing the entire documentation. After changing to the documentation’s root directory, I issued the command:

sphinx-build -b singlehtml . zzz
I finally had a single HTML file at zzz/index.html, which I sent to the printer and got some nice documentation in-print.
sphinx  python  resources 
september 2017 by HM0880
Monthly and yearly plans with JetBrains Toolbox
Subscription Options | Student Licenses | Purchase Terms
All Desktop Tools. Monthly and Yearly plans (United States; reduced pricing for holders of an old perpetual license)

All Products Pack (access to all desktop products including IntelliJ IDEA Ultimate, ReSharper Ultimate and other IDEs): $249.00 1st year / $199.00 2nd year / $149.00 3rd year onwards
IntelliJ IDEA Ultimate (a complete toolset for JVM-based web, mobile and enterprise development): $149.00 / $119.00 / $89.00
ReSharper Ultimate + Rider (Visual Studio extensions, profilers, and a standalone cross-platform .NET IDE): $179.00 / $143.00 / $107.00
ReSharper C++: $89.00 / $71.00 / $53.00
Other tools (product names not captured in this excerpt): tiers at $139.00 / $111.00 / $83.00, $129.00 / $103.00 / $77.00, several at $89.00 / $71.00 / $53.00, and one at $59.00 / $47.00 / $35.00 (1st year / 2nd year / 3rd year onwards)
python  IDEs 
september 2017 by HM0880
Can you step through python code to help debug issues? - Stack Overflow
There's a Python debugger called pdb just for doing that!

You can launch a Python program through pdb by using pdb or python -m pdb

There are a few commands you can then issue, which are documented on the pdb page.

Some useful ones to remember are:

b: set a breakpoint
c: continue debugging until you hit a breakpoint
s: step through the code
n: to go to next line of code
l: list source code for the current file (default: 11 lines including the line being executed)
u: navigate up a stack frame
d: navigate down a stack frame
p: to print the value of an expression in the current context
python  resources  python_debugging 
july 2017 by HM0880
performance - How can you profile a python script? - Stack Overflow
Python includes a profiler called cProfile. It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations.

You can call it from within your code, or from the interpreter, like this:

import cProfile'foo()')
Even more usefully, you can invoke the cProfile when running a script:

python -m cProfile
To make it even easier, I made a little batch file called 'profile.bat':

python -m cProfile %1
So all I have to do is run:

And I get this:

1007 function calls in 0.061 CPU seconds

Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.061 0.061 <string>:1(<module>)
1000 0.051 0.000 0.051 0.000<lambda>)
1 0.005 0.005 0.061 0.061<module>)
1 0.000 0.000 0.061 0.061 {execfile}
1 0.002 0.002 0.053 0.053 {map}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler objects}
1 0.000 0.000 0.000 0.000 {range}
1 0.003 0.003 0.003 0.003 {sum}
EDIT: Updated link to a good video resource from PyCon 2013 titled Python Profiling.

shareimprove this answer
edited Mar 5 at 14:47

answered Feb 24 '09 at 16:01

Chris Lawlor
Also it is useful to sort the results, that can be done by -s switch, example: '-s time'. You can use cumulative/name/time/file sorting options. – Jiri Feb 25 '09 at 17:41
Unfortunately, though, you can't sort percall for either the total or cumulative times, which is a major deficiency IMO. – Joe Shaw Dec 17 '09 at 16:31
It is also worth noting that you can use the cProfile module from ipython using the magic function %prun (profile run). First import your module, and then call the main function with %prun: import euler048; %prun euler048.main() – RussellStewart Mar 31 '14 at 19:58
For visualizing cProfile dumps (created by python -m cProfile -o <out.profile> <script>), RunSnakeRun, invoked as runsnake <out.profile> is invaluable. – ikdc May 5 '14 at 1:33
@NeilG even for python 3, cprofile is still recommended over profile. – trichoplax Jan 4 '15 at 2:43
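The profiler can also be driven programmatically, with the results sorted in code just as the -s switch does on the command line. A sketch (the busy() function is invented for illustration):

```python
import cProfile
import io
import pstats

def busy():
    # A small workload so the profiler has something to measure.
    return sum(i * i for i in range(10000))

pr = cProfile.Profile()
pr.enable()
busy()
pr.disable()

buf = io.StringIO()
# sort_stats('cumulative') matches `-s cumulative` on the command line.
pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```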
june 2017 by HM0880
pytesseract 0.1.7 : Python Package Index
pytesseract 0.1.7

Python-tesseract is a python wrapper for google's Tesseract-OCR

Python-tesseract is an optical character recognition (OCR) tool for python. That is, it will recognize and “read” the text embedded in images.

Python-tesseract is a wrapper for Google’s Tesseract-OCR Engine. It is also useful as a stand-alone invocation script to tesseract, as it can read all image types supported by the Python Imaging Library, including jpeg, png, gif, bmp, tiff, and others, whereas tesseract-ocr by default only supports tiff and bmp. Additionally, if used as a script, Python-tesseract will print the recognized text instead of writing it to a file. Support for confidence estimates and bounding box data is planned for future releases.
python  machinelearing 
june 2017 by HM0880
Collecting Data from the Modern Web | Web Scraping with Python
Learn web scraping and crawling techniques to access unlimited data from any web source in any format. With this practical guide, you’ll learn how to use Python scripts and web APIs to gather and process data from thousands—or even millions—of web pages at once.

Ideal for programmers, security professionals, and web administrators familiar with Python, this book not only teaches basic web scraping mechanics, but also delves into more advanced topics, such as analyzing raw data or using scrapers for frontend website testing. Code samples are available to help you understand the concepts in practice.
python  scraping  books 
june 2017 by HM0880
Python(x,y) by python-xy
Python(x,y) has five main features:

collecting scientific-oriented Python libraries and development environment tools ;
collecting almost all free related documentation ;
providing a quick guide to get started in Python / Qt / Spyder ;
providing an all-in-one setup program, so the user can install or uninstall all these packages and features by clicking on one button only.
python  IDEs  review 
may 2017 by HM0880
Pythonxy - Scientific-oriented Python Distribution based on Qt and Spyder | Hacker News
Pythonxy - Scientific-oriented Python Distribution based on Qt and Spyder (
26 points by powertry 1212 days ago | hide | past | web | 8 comments | favorite

mkl 1212 days ago [-]

I've switched to Anaconda recently:
It has all the same libraries, Spyder, etc., and seems much more versatile, with easy package management and upgrades, and the ability to have different independent environments, e.g. one for Python 2.7 and one for Python 3:
Plus it's cross-platform and works pretty much identically on Linux and Windows (and presumably Mac).

factorizer 1212 days ago [-]

This looks great! Will give it a try. On Windows I've been using WinPython
which has the additional advantage of being a portable installation. If i'm not mistaken, the same people are behind WinPython and PythonXY.

alok-g 1212 days ago [-]

+1 for WinPython.
WinPython has indeed been created by a core developer of PythonXY, using his experience from PythonXY [1].
Key advantages: (A) Portable version allowing indefinite number of side-by-side installations, (B) Both 32-bit and 64-bit versions are available for both Python 2 and 3.
Downsides: Windows-only

mkl 1212 days ago [-]

Anaconda is certainly portable on Linux (all in one folder - set the bin path and everything just works), and I think elsewhere too.

pwang 1212 days ago [-]

Yes, Anaconda is designed to be portable on every platform we build it for, as are all the additional packages that we build for it.

powertry 1212 days ago [-]

Cool, I was looking for something that plays nice with windows!

numlocked 1212 days ago [-]

First, it should be noted that pythonxy is for windows. Second, I find it hard to continue recommending pythonxy because of its lack of 64-bit support. This limits the memory consumption of the python process to 2gb, which is unacceptable in a lot of modeling scenarios. I'd recommend Anaconda, or Spyder + 64-bit versions of the python packages you need (available here for windows users:

m_mueller 1212 days ago [-]

Just to let you know (in case OP has edit access): The screenshots on the Wiki Page give a 404 error.
python  IDEs 
may 2017 by HM0880
Why does comparing strings in Python using either '==' or 'is' sometimes produce a different result? - Stack Overflow
SilentGhost and others are correct here. is is used for identity comparison, while == is used for equality comparison.

The reason this works interactively is that (most) string literals are interned by default. From Wikipedia:

Interned strings speed up string comparisons, which are sometimes a performance bottleneck in applications (such as compilers and dynamic programming language runtimes) that rely heavily on hash tables with string keys. Without interning, checking that two different strings are equal involves examining every character of both strings. This is slow for several reasons: it is inherently O(n) in the length of the strings; it typically requires reads from several regions of memory, which take time; and the reads fill up the processor cache, meaning there is less cache available for other needs. With interned strings, a simple object identity test suffices after the original intern operation; this is typically implemented as a pointer equality test, normally just a single machine instruction with no memory reference at all.
So, when you have two string literals (words that are literally typed into your program source code, surrounded by quotation marks) in your program that have the same value, the Python compiler will automatically intern the strings, making them both stored at the same memory location. (Note that this doesn't always happen, and the rules for when this happens are quite convoluted, so please don't rely on this behavior in production code!)

Since in your interactive session both strings are actually stored in the same memory location, they have the same identity, so the is operator works as expected. But if you construct a string by some other method (even if that string contains exactly the same characters), then the string may be equal, but it is not the same string -- that is, it has a different identity, because it is stored in a different place in memory.
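A short demonstration (CPython behavior; as the answer warns, interning rules are implementation details, so don't rely on them):

```python
import sys

a = "hello"
b = "".join(["hel", "lo"])  # built at runtime, so not interned in CPython

print(a == b)   # True  -- same characters
print(a is b)   # False -- different objects in memory

c = sys.intern(b)  # explicitly intern the runtime-built string
print(a is c)      # True -- both now name the canonical interned object
```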
april 2017 by HM0880
GitHub - bluenote10/PandasDataFrameGUI: A minimalistic GUI for analyzing Pandas DataFrames.
Pandas DataFrame GUI

A minimalistic GUI for analyzing Pandas DataFrames based on wxPython.


import dfgui

dfgui.show(df)  # df is an existing pandas DataFrame

Tabular view of data frame
Columns are sortable (by clicking column header)
Columns can be enabled/disabled (left click on 'Columns' tab)
Columns can be rearranged (right click drag on 'Columns' tab)
Generic filtering: Write arbitrary Python expression to filter rows. Warning: Uses Python's eval -- use with care.
Histogram plots
Scatter plots
python  pandas  resources 
april 2017 by HM0880
Tracking and Surveillance Projects | Shawn Lankton Online
I took a special topics course in Spring 2008 at Georgia Tech, ECE 8893: Embedded Video Surveillance Systems. The course included three projects, each shown below. Detailed information about the algorithm is in the source code comments. (All the source is in Python)

Project 1: Activity Density Estimation

Use background subtraction to find moving foreground objects in a video sequence. Then, color-code regions with the most activity. Here is the result:


Project 2: Styrofoam Airplane Tracking

Find all white styrofoam planes in the scene and track them throughout the scene. We used color thresholding and simple dynamics to do the tracking.


Project 3: Pedestrian Tracking

Count and track the pedestrians that cross on a busy sidewalk. We use a combination of motion estimation via background subtraction and feature matching using the Bhattacharyya measure.

Final Report: p3.pdf

Most of this code is very hack-y because it was done quickly. However, it was
fun to learn Python, and the class was enjoyable overall.
python  review 
january 2017 by HM0880
Python 2.7 Tutorial
very good python/regex tutorial
regex  python 
november 2016 by HM0880
Keras Documentation
Keras: Deep Learning library for Theano and TensorFlow

You have just found Keras.

Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:

Allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
Supports both convolutional networks and recurrent networks, as well as combinations of the two.
Supports arbitrary connectivity schemes (including multi-input and multi-output training).
Runs seamlessly on CPU and GPU.
Read the documentation at

Keras is compatible with: Python 2.7-3.5.

Guiding principles

Modularity. A model is understood as a sequence or a graph of standalone, fully-configurable modules that can be plugged together with as few restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions, regularization schemes are all standalone modules that you can combine to create new models.
Minimalism. Each module should be kept short and simple. Every piece of code should be transparent upon first reading. No black magic: it hurts iteration speed and ability to innovate.
Easy extensibility. New modules are dead simple to add (as new classes and functions), and existing modules provide ample examples. To be able to easily create new modules allows for total expressiveness, making Keras suitable for advanced research.
Work with Python. No separate models configuration files in a declarative format. Models are described in Python code, which is compact, easier to debug, and allows for ease of extensibility.
Getting started: 30 seconds to Keras

The core data structure of Keras is a model, a way to organize layers. The main type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API.
python  machinelearing 
october 2016 by HM0880
scikit-learn: machine learning in Python — scikit-learn 0.18 documentation
Machine Learning in Python

Simple and efficient tools for data mining and data analysis
Accessible to everybody, and reusable in various contexts
Built on NumPy, SciPy, and matplotlib
Open source, commercially usable - BSD license


Classification

Identifying the category to which an object belongs.
Applications: Spam detection, Image recognition.
Algorithms: SVM, nearest neighbors, random forest, ... Examples

Regression

Predicting a continuous-valued attribute associated with an object.
Applications: Drug response, Stock prices.
Algorithms: SVR, ridge regression, Lasso, ... Examples

Clustering

Automatic grouping of similar objects into sets.
Applications: Customer segmentation, Grouping experiment outcomes
Algorithms: k-Means, spectral clustering, mean-shift, ... Examples

Dimensionality reduction

Reducing the number of random variables to consider.
Applications: Visualization, Increased efficiency
Algorithms: PCA, feature selection, non-negative matrix factorization. Examples

Model selection

Comparing, validating and choosing parameters and models.
Goal: Improved accuracy via parameter tuning
Modules: grid search, cross validation, metrics. Examples

Preprocessing

Feature extraction and normalization.
Application: Transforming input data such as text for use with machine learning algorithms.
Modules: preprocessing, feature extraction.
python  machinelearing  review 
october 2016 by HM0880
python-pptx/quickstart.rst at master · scanny/python-pptx · GitHub
A quick way to get started is by trying out some of the examples below to get a feel for how to use |pp|.

The :ref:`API documentation <api>` can help you with the fine details of calling signatures and behaviors.

Hello World! example


from pptx import Presentation

prs = Presentation()
title_slide_layout = prs.slide_layouts[0]
slide = prs.slides.add_slide(title_slide_layout)
title = slide.shapes.title
subtitle = slide.placeholders[1]

title.text = "Hello, World!"
subtitle.text = "python-pptx was here!"

prs.save('test.pptx')
python  restructuredtext 
october 2016 by HM0880
Anaconda, the Python IDE for Sublime Text 3
Docker Containers

Anaconda can use docker container environments to lint and complete your code. Some IDE utilities will not work, or won't offer their full features, when docker environments are in use; for example, the Goto IDE command will not work if you try to go to a file that is located in the container (workarounds are provided anyway).

How to run anaconda’s minserver in a Docker container?
There are many ways to make anaconda connect to and use a minserver running in a Docker container. The way to use Docker with anaconda is to start your application environment manually with docker run, docker exec or docker-compose, and then use a regular anaconda remote worker with the generic tcp://address:port configuration and whatever directory map you want or need (remember that directory maps are a common feature of all anaconda remote workers, so they are present in both the tcp:// and vagrant:// python interpreter schemes).

We are going to present different ways to connect anaconda with Docker: some of them use docker run, others use docker exec in an already running container (one that probably contains your code), and others don't use the docker command directly but instead use docker-compose with a docker-compose.yml file.

Run anaconda’s minserver in its own container
If you just need to use the Python interpreter installed in the container you can just run a new container that executes the anaconda’s minserver with the desired interpreter and set the python_interpreter to point with a tcp remote connection to your container.

Run the container

For this example we will use the generic python:2.7 docker image, but it will work for any docker image that contains a valid Python installation. The command to run our container will look like this:
python  docker 
september 2016 by HM0880
get full path name using list comprehension in python - Stack Overflow
down vote
Personally I'd write it as a generator:

import os

def filetree(top):
    for dirpath, dirnames, fnames in os.walk(top):
        for fname in fnames:
            yield os.path.join(dirpath, fname)
Then you can either use it in a loop:

for name in filetree('/home/user'):
or slurp it into a list:

flist = list(filetree('/home/user'))
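As a sanity check, the generator can be exercised on a throwaway directory tree (the file names here are illustrative):

```python
import os
import tempfile

def filetree(top):
    # walk the tree rooted at `top`, yielding the full path of every file
    for dirpath, dirnames, fnames in os.walk(top):
        for fname in fnames:
            yield os.path.join(dirpath, fname)

top = tempfile.mkdtemp()
open(os.path.join(top, 'a.txt'), 'w').close()
os.mkdir(os.path.join(top, 'sub'))
open(os.path.join(top, 'sub', 'b.txt'), 'w').close()

rel = sorted(os.path.relpath(p, top) for p in filetree(top))
print(rel)  # ['a.txt', 'sub/b.txt'] on POSIX
```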
august 2016 by HM0880
GitHub - mwaskom/seaborn: Statistical data visualization using matplotlib
Seaborn: statistical data visualization

Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.


Online documentation is available here. It includes a high-level tutorial, detailed API documentation, and other useful info.

There are docs for the development version here. These should more or less correspond with the github master branch, but they're not built automatically and thus may fall out of sync at times.

august 2016 by HM0880
iPython on Chromebook? Don't mind if I do! : chromeos
iPython on Chromebook? Don't mind if I do! self.chromeos
Submitted 1 year ago by Second_FoundationeerAcer C720 | Crouton
So, for all you coders, scientists, engineers, etc. out there, you can use the IPython notebook from your Chromebook via Google Drive-ish by adding coLaboratory which launches an IPython notebook on google drive. You can do things like import matplotlib and numpy, do some data analysis, make some pretty figures, all while being able to share this across your google drive!
[–]segonius 4 points 1 year ago 
Another way to do this. You can install Anaconda which has a pretty complete scientific stack.
With Developer mode on, open crosh (Ctrl+Alt+t).
Open a shell (crosh> shell)
Decide where you want to put it, make it writable. I put mine in /usr/local. So I had to
$ cd /usr
$ sudo chmod a+rw local
$ cd ~/Downloads
$ bash ./
Start the notebook with
$ ipython notebook --no-browser
Navigate to in a new tab and go to town.
[–]Second_FoundationeerAcer C720 | Crouton[S] 1 point 1 year ago 
I... was not aware of this method for Chromebooks. This is pretty awesome too! coLaboratory was made with scientific collaborations on data analysis or something in mind, so it's meant to be more of a shared thing. I will have to try out this method later today.
[–]brousch -4 points 1 year ago 
Yeah, but developer mode is cheating!
chromebooks  python  jupyter 
august 2016 by HM0880
python - Converting to (not from) ipython Notebook format - Stack Overflow
The IPython API has functions for reading and writing notebook files. You should use this API and not create JSON directly. For example, the following code snippet converts a script into a notebook test.ipynb.

import IPython.nbformat.current as nbf
nb ='', 'r'), 'py')
nbf.write(nb, open('test.ipynb', 'w'), 'ipynb')
Regarding the format of the .py file understood by this mechanism, it is best to simply look into the parser class IPython.nbformat.v3.nbpy.PyReader. The code can be found here (it is not very large):

Edit: This answer was originally written for IPython 3. I don't know how to do this properly with IPython 4. Here is an updated version of the link above, pointing to the version of from the IPython 3.2.1 release:

Basically you use special comments such as # <codecell> or # <markdowncell> to separate the individual cells. Look at the line.startswith statements in PyReader.to_notebook for a complete list.
python  jupyter 
august 2016 by HM0880
Saves Jupyter Notebooks as .py and .html files automatically. Add to the file of your associated profile. · GitHub
Saves Jupyter Notebooks as .py and .html files automatically. Add to the file of your associated profile.
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py and .html files."""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)
    check_call(['ipython', 'nbconvert', '--to', 'html', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
python  jupyter 
august 2016 by HM0880
IPython and Jupyter Notebooks: Automatically Export .py and .html - Dev pro tips
[Updated 2016-03-04 to support Jupyter 4 notebooks – see below.]

IPython notebooks are stored in a format that is not particularly human-readable and doesn’t work well in version control.

One way to solve this problem is to automatically export the code from IPython notebooks into a vanilla Python file after each save.

It’s also useful to automatically generate a HTML file of the notebook on each save. This can be done manually in Jupyter (File > Download as > HTML), but if you always want this, doing it automatically is much easier.

Use the following code to automatically save a .py and a .html file when you save a notebook in Jupyter. These two files will be saved in the same folder as the parent .ipynb file.

First, run ipython locate profile default, which will give you the path to save the following code in.

Save the code below in this folder as


import os
from subprocess import check_call

c = get_config()

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py scripts"""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)
    check_call(['ipython', 'nbconvert', '--to', 'html', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
Now, run ipython notebook. You will see an error message in the terminal if there are any syntax or runtime errors with If everything looks good, go to your web browser, open a notebook, and click the Save/Checkpoint button (it's the floppy disk icon in the Jupyter toolbar). You should see a .py and a .html file appear alongside your .ipynb file.
python  jupyter 
august 2016 by HM0880
prettyplotlib by olgabot
Python matplotlib-enhancer library which painlessly creates beautiful default matplotlib plots. Inspired by Edward Tufte's work on information design and Cynthia Brewer's work on color perception.

I truly believe that scientific progress is impeded when improper data visualizations are used. I spent a lot of time tweaking my figures to make them more understandable, and realized the scientific world could be a better place if the default parameters for plotting libraries followed recent advances in information design research. And thus prettyplotlib was born.


matplotlib. Can be installed via pip install matplotlib or easy_install matplotlib
brewer2mpl. Can be installed via pip install brewer2mpl or easy_install brewer2mpl
august 2016 by HM0880
GitHub - janschulz/knitpy: knitpy: Elegant, flexible and fast dynamic report generation with python
knitpy: Elegant, flexible and fast dynamic report generation with python

This is a port of knitr and rmarkdown to python.

To start with, you can run the example overview document. To convert to all defined output formats, run knitpy --to="all" -- examples\knitpy_overview.pymd. This will produce a html, docx and pdf output (if you have pdflatex in path). You can view a markdown rendered and a html rendered version of this file. It's not yet as pretty as the knitr version...

For a description of the code format see and replace {r <r style options>} by {python <python style options>} and of course use python code blocks...

It uses the IPython kernel infrastructure to execute code, so all kernels for IPython are (aem... can potentially be) supported.

What works:

code blocks and inline code
plots are shown inline
knitpy filename.pymd will convert filename.pymd to the default output format html.
output formats html, pdf and docx. Change with --to=<format>
--to=all will convert to all export formats specified in the yaml header
code chunk arguments eval, results (apart from "hold"), include and echo
errors in code chunks are shown in the document
uses the IPython display framework, so rich output for objects implementing _repr_html_() or _repr_markdown_(). Mimetypes not understood by the final output format are automatically converted via pandoc.
config files: generate an empty one with knitpy --init --profile-dir=.
using it from python (-> your app/ ipython notebook): import knitpy; knitpy.render(filename.pymd, output="html") will convert filename.pymd to filename.html. output=all will convert to all document types (as specified in the YAML header of the document). The call will return a list of converted documents.
debugging with --debug, --kernel-debug=True, --output-debug=True
What does not work (=everything else :-) ):

most YAML headers are currently ignored
some advertised command-line options are ignored
most code chunk arguments (apart from the ones above) are ignored
probably lots of other stuff...
python  documentation 
august 2016 by HM0880
syntax - What does ** (double star) and * (star) do for Python parameters? - Stack Overflow
The *args and **kwargs is a common idiom to allow arbitrary number of arguments to functions as described in the section more on defining functions in the Python documentation.

The *args will give you all function parameters as a tuple:
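A quick illustration (the function name demo is made up for this sketch):

```python
def demo(first, *args, **kwargs):
    # extra positional arguments land in the tuple `args`,
    # extra keyword arguments land in the dict `kwargs`
    return first, args, kwargs

print(demo(1, 2, 3, color='red'))  # (1, (2, 3), {'color': 'red'})
```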
july 2016 by HM0880
python - Project Euler 10: find the sum of all the primes below two million - Code Review Stack Exchange
def eratosthenes2(n):
    # Declare a set - an unordered collection of unique elements
    multiples = set()

    # Iterate through [2, n]
    for i in range(2, n+1):

        # If i has not been eliminated already
        if i not in multiples:

            # Yay prime!
            yield i

            # Add multiples of the prime in the range to the 'invalid' set
            multiples.update(range(i*i, n+1, i))

# Now sum it up (avoid naming the accumulator `iter`, which shadows the builtin)
total = 0
ml = list(eratosthenes2(2000000))
for x in ml:
    total = int(x) + total

Completed almost before I could get my finger off of the 'run' button.
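The accumulator loop can also be replaced by the built-in sum(); a quick check on a small bound (the primes below 10 are 2, 3, 5, 7):

```python
def eratosthenes2(n):
    multiples = set()
    for i in range(2, n + 1):
        if i not in multiples:
            yield i                                   # i is prime
            multiples.update(range(i * i, n + 1, i))  # strike out its multiples

print(sum(eratosthenes2(9)))  # 2 + 3 + 5 + 7 = 17
```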
python  projecteuler  algorithms 
june 2016 by HM0880
Creating Dependency Graphs in Python - Stack Overflow
Usually "dependency" is defined for module / package import.
What you are looking for is a visualizing call flow.
You can still not guarantee that you will not break functionality :)

My experience and solution:

Many times I have found the call-flow data overwhelming and the diagram too complex, so what I usually do is trace the call flow partially, only for the function I am interested in.

This is done by utilizing the sys.settrace(...) function. After generating the call flows as textual data, I generate a call graph using graphviz.
On call tracing
For generating graphs, use graphviz solutions from networkX.
[Edit: based on comments]

Then my piecemeal solution works better. Just insert the code and use the decorator on a function that you want to trace. You will see gaps where deferred comes into picture but that can be worked out. You will not get the complete picture directly.

I have been trying to do that and have made a few posts that work from that understanding.
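A minimal sketch of the sys.settrace approach described above (the traced functions are made-up examples; a real tracer would also record caller/callee pairs for the graph):

```python
import sys

calls = []

def tracer(frame, event, arg):
    # record the name of every Python function entered while tracing is active
    if event == 'call':
        calls.append(frame.f_code.co_name)
    return tracer

def inner():
    return 1

def outer():
    return inner()

sys.settrace(tracer)
outer()
sys.settrace(None)

print(calls)  # ['outer', 'inner']
```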
python  workresources 
april 2016 by HM0880
How can I count the occurrences of a list item in Python? - Stack Overflow
If you are using Python 2.7 or 3 and you want number of occurrences for each element:

>>> from collections import Counter
>>> z = ['blue', 'red', 'blue', 'yellow', 'blue', 'red']
>>> Counter(z)
Counter({'blue': 3, 'red': 2, 'yellow': 1})
+1 for collections, amazingly underused – danodonovan Jun 11 '12 at 13:22
Counter(z).most_common(n) will list elements and counts as tuples in decreasing order, where n is the number of elements to list. Omit n to list everything. – davidjb Apr 2 '14 at 0:24
sometimes scrolling down really pays off. thanks! – multiphrenic Apr 11 '14 at 17:26
If you just want the values and not the keys, do this: Counter(z).values() – Stefan Gruenwald May 25 '14 at 1:13
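The most_common tip from the comments, runnable:

```python
from collections import Counter

z = ['blue', 'red', 'blue', 'yellow', 'blue', 'red']
print(Counter(z).most_common(2))    # [('blue', 3), ('red', 2)]
print(sorted(Counter(z).values()))  # counts only: [1, 2, 3]
```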
python  resources 
december 2015 by HM0880
Question to ask during an interview? (from the interviewers perspective) : devops
This is too late, but I would have the guy (in his preferred language) whiteboard out working with some data structures and other programming constructs. The problem still is that most competent people can make it through those exercises, and some people aren't great when put in front of a whiteboard.
One of my go-tos for people who say they are good with Python is to talk about threads (open-ended on purpose) and see where it goes from there.
I feel like I suck at interviewing though, so take this with a grain of salt.
python  questions_for_interviews 
december 2015 by HM0880
lionheart/ · GitHub
A full-featured Python wrapper (and command-line utility) for the Pinboard API. Built by the makers of Pushpin for Pinboard.
pinboard  python  api 
november 2015 by HM0880
TimeComplexity - Python Wiki
This page documents the time-complexity (aka "Big O" or "Big Oh") of various operations in current CPython. Other Python implementations (or older or still-under development versions of CPython) may have slightly different performance characteristics. However, it is generally safe to assume that they are not slower by more than a factor of O(log n).

Generally, 'n' is the number of elements currently in the container. 'k' is either the value of a parameter or the number of elements in the parameter.


The Average Case assumes parameters generated uniformly at random.

Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move). If you need to add/remove at both ends, consider using a collections.deque instead.
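For example, prepending is O(n) for a list but O(1) for a deque:

```python
from collections import deque

d = deque([2, 3, 4])
d.appendleft(1)   # O(1); the list equivalent, lst.insert(0, 1), is O(n)
d.append(5)       # O(1) at either end
print(list(d))    # [1, 2, 3, 4, 5]
```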
algorithms  python  review 
november 2015 by HM0880
TCP/IP Client and Server - Python Module of the Week
TCP/IP Client and Server
Sockets can be configured to act as a server and listen for incoming messages, or connect to other applications as a client. After both ends of a TCP/IP socket are connected, communication is bi-directional.
october 2015 by HM0880
code golf - Graph Florets of a Flower - Programming Puzzles & Code Golf Stack Exchange
Here's a longer version that produces a cleaner look by removing the grid and axis:

from pylab import *

def florets(n):
    for i in arange(0, n, 2.39996): polar(i, sqrt(i), 'o')
    grid(0)     # turn off grid
    xticks([])  # turn off angle axis
    yticks([])  # turn off radius axis
The reason for the different colors is because each point is plotted separately and treated as its own set of data. If the angles and radii were passed as lists, then they would be treated as one set and be of one color.

I think this is the prettiest answer by far. It's very cool to see the clear spiral patterns in the center. – El'endia Starman Oct 8 at 23:10
You could save a byte by using a normal for loop instead of a list comprehension. It'd have to be on its own line, but ; and \n are the same length, so that doesn't matter. I.e.: from pylab import* - for i in arange(0,input(),2.39996):polar(i,sqrt(i),'o') - show() – marinus Oct 9 at 7:50
@marinus but then its no longer a supercool one liner! But thanks, I've added it in. – Status Oct 9 at 16:20
codegolf  python  review 
october 2015 by HM0880
Python Networking Programming
#!/usr/bin/python # This is file

import socket # Import socket module

s = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345 # Reserve a port for your service.
s.bind((host, port)) # Bind to the port

s.listen(5) # Now wait for client connection.
while True:
c, addr = s.accept() # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')
c.close() # Close the connection
A Simple Client
Let us write a very simple client program which opens a connection to a given port 12345 and given host. This is very simple to create a socket client using Python's socket module function.

The socket.connect((hostname, port)) call opens a TCP connection to hostname on the port. Once you have a socket open, you can read from it like any IO object. When done, remember to close it, as you would close a file.

The following code is a very simple client that connects to a given host and port, reads any available data from the socket, and then exits −

#!/usr/bin/python # This is file

import socket # Import socket module

s = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345 # Reserve a port for your service.

s.connect((host, port))
print s.recv(1024)
s.close() # Close the socket when done
Now run the server in the background and then run the client above to see the result.
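The same exchange can be sketched in Python 3 and exercised in a single process by running the server in a thread (port 0 asks the OS for a free port; this is an adaptation, not the tutorial's exact code):

```python
import socket
import threading

def serve_once(server):
    # accept a single client, send a greeting, and hang up
    conn, addr = server.accept()
    conn.sendall(b'Thank you for connecting')
    conn.close()

server = socket.socket()
server.bind(('', 0))   # port 0: let the OS pick a free port
port = server.getsockname()[1]
server.listen(1)
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket()
client.connect(('', port))
data = client.recv(1024)
client.close()
server.close()

print(data.decode())  # Thank you for connecting
```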
october 2015 by HM0880
numpy/HOWTO_DOCUMENT.rst.txt at master · numpy/numpy
A documentation string (docstring) is a string that describes a module, function, class, or method definition. The docstring is a special attribute of the object (object.__doc__) and, for consistency, is surrounded by triple double quotes, i.e.:

"""This is the form of a docstring.

It can be spread over several lines.

"""

NumPy, SciPy, and the scikits follow a common convention for docstrings that provides for consistency, while also allowing our toolchain to produce well-formatted reference guides. This document describes the current community consensus for such a standard. If you have suggestions for improvements, post them on the numpy-discussion list.

Our docstring standard uses re-structured text (reST) syntax and is rendered using Sphinx (a pre-processor that understands the particular documentation style we are using). While a rich set of markup is available, we limit ourselves to a very basic subset, in order to provide docstrings that are easy to read on text-only terminals.

A guiding principle is that human readers of the text are given precedence over contorting docstrings so our tools produce nice output. Rather than sacrificing the readability of the docstrings, we have written pre-processors to assist Sphinx in its task.

The length of docstring lines should be kept to 75 characters to facilitate reading the docstrings in text terminals.
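A short docstring in this style, using the convention's Parameters and Returns sections (the function itself is a made-up example):

```python
def add(a, b):
    """Add two numbers.

    Parameters
    ----------
    a, b : int
        The numbers to add.

    Returns
    -------
    int
        The sum of `a` and `b`.
    """
    return a + b

print(add.__doc__.splitlines()[0])  # Add two numbers.
```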
python  sphinx  documentation 
october 2015 by HM0880
installing cx_Freeze to python at windows -
I faced a similar problem (Python 3.4 32-bit, on Windows 7 64-bit). After installation of cx_freeze, three files appeared in

These files have no file extensions, but appear to be Python scripts. When you run
python.exe cxfreeze-postinstall
from the command prompt, two batch files are created in the Python scripts directory:

From that moment on, you should be able to run cx_freeze.

cx_freeze was installed using the provided win32 installer. Installing it using pip gave exactly the same result.
python  resources 
september 2015 by HM0880
Python String Format Cookbook – – all about that code
Every time I use Python's string formatter (version 2.7 and up), I get it wrong, and for the life of me I can't figure out their documentation format. I got very used to the older % method. So I started to create my own string format cookbook. Let me know in the comments of any other good examples to include.

String Formatting Cookbook
Number Formatting
The following table shows various ways to format numbers using python's newish str.format(), with examples of both float and integer formatting.

To run examples use print("FORMAT".format(NUMBER)); So to get the output of the first example, you would run: print("{:.2f}".format(3.1415926));

Number Format Output Description
3.1415926 {:.2f} 3.14 2 decimal places
3.1415926 {:+.2f} +3.14 2 decimal places with sign
-1 {:+.2f} -1.00 2 decimal places with sign
2.71828 {:.0f} 3 No decimal places
5 {:0>2d} 05 Pad number with zeros (left padding, width 2)
5 {:x<4d} 5xxx Pad number with x's (right padding, width 4)
10 {:x<4d} 10xx Pad number with x's (right padding, width 4)
1000000 {:,} 1,000,000 Number format with comma separator
0.25 {:.2%} 25.00% Format percentage
1000000000 {:.2e} 1.00e+09 Exponent notation
13 {:10d}         13 Right aligned (default, width 10)
13 {:<10d} 13 Left aligned (width 10)
13 {:^10d}     13 Center aligned (width 10)
string.format() basics
Here are a couple of examples of basic string substitution; the {} is the placeholder for the substituted variables. If no format spec is given, the value is inserted and formatted as a string.

s1 = "so much depends upon {}".format("a red wheel barrow")
s2 = "glazed with {} water beside the {} chickens".format("rain", "white")
You can also use the numeric position of the variables and change them in the strings; this gives some flexibility when formatting: if you made a mistake in the order, you can easily correct it without shuffling all the variables around.

s1 = " {0} is better than {1} ".format("emacs", "vim")
s2 = " {1} is better than {0} ".format("emacs", "vim")
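A few of the table's rows, runnable as-is:

```python
print("{:.2f}".format(3.1415926))   # 3.14
print("{:+.2f}".format(-1))         # -1.00
print("{:0>2d}".format(5))          # 05
print("{:,}".format(1000000))       # 1,000,000
print("{:.2%}".format(0.25))        # 25.00%
print("{1} is better than {0}".format("emacs", "vim"))  # vim is better than emacs
```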
python  resources 
september 2015 by HM0880
N-D labeled arrays and datasets in Python — xray 0.6.0 documentation
xray is an open source project and Python package that aims to bring the labeled data power of pandas to the physical sciences, by providing N-dimensional variants of the core pandas data structures.

Our goal is to provide a pandas-like and pandas-compatible toolkit for analytics on multi-dimensional arrays, rather than the tabular data for which pandas excels. Our approach adopts the Common Data Model for self- describing scientific data in widespread use in the Earth sciences: xray.Dataset is an in-memory representation of a netCDF file.
september 2015 by HM0880
Werkzeug 0.10.4 : Python Package Index
The Swiss Army knife of Python web development

Werkzeug started as simple collection of various utilities for WSGI applications and has become one of the most advanced WSGI utility modules. It includes a powerful debugger, full featured request and response objects, HTTP utilities to handle entity tags, cache control headers, HTTP dates, cookie handling, file uploads, a powerful URL routing system and a bunch of community contributed addon modules.

Werkzeug is unicode aware and doesn’t enforce a specific template engine, database adapter or anything else. It doesn’t even enforce a specific way of handling requests and leaves all that up to the developer. It’s most useful for end user applications which should work on as many server environments as possible (such as blogs, wikis, bulletin boards, etc.).

Details and example applications are available on the Werkzeug website.
september 2015 by HM0880
Python portable, linux & windows - Stack Overflow
You can install two python's. Download Anaconda from website for linux and windows. Install them (on win and lin machines) and then create two environments on your USB using the conda package manager:

# Windows
conda create -p E:\pywin python all other packages you want
# Linux
conda create -p /mnt/usb/pylin python all other packages you want
Then use the pywin environment on windows and pylin on linux.

# Windows
# Linux
With conda you will be able to maintain the same packages in both environments so you'll have everything you need on both systems...

Or you can install the Anaconda directly to the USB, but that will require more space...

Viktor Kerkez
I can't install python on any of the school computers. Like I said, I will only be able to run python from the usb. Does this still work in my scenario? –  user2676813 Oct 3 '13 at 23:41
@Gecko: You will need to borrow a Windows machine from somewhere—or set up a Windows virtual machine on your linux box—in order to create the Windows environment for your USB drive. After that, it will work on machines that don't have Python installed on it. –  abarnert Oct 3 '13 at 23:51
@Gecko You can also install anaconda directly to usb. But it will take more space... –  Viktor Kerkez Oct 4 '13 at 15:30
python  portableapps 
september 2015 by HM0880
Python. Client side.
Skulpt is an entirely in-browser implementation of Python.

No preprocessing, plugins, or server-side support required, just write Python and reload.

The code is run entirely in your browser, so don't feel obligated to "crash the server" — you'll only stub your toe. Ctrl-Enter to run.
september 2015 by HM0880
birkenfeld / sphinx / source / sphinx / — Bitbucket
# -*- coding: utf-8 -*-

Quickly setup documentation source to work with Sphinx.

:copyright: Copyright 2007-2014 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
python  sphinx  sourcecode  review 
september 2015 by HM0880
matplotlib colormaps
Update: These colormaps have been merged into the development version of Matplotlib, all of them will be included in matplotlib 1.5, and "option D" (now called "viridis") will be the new default colormap in matplotlib 2.0. Third parties have also made it available in R and Matlab. Below is the talk presented at SciPy2015 that outlines the whole story.
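On Matplotlib versions before 2.0 you can opt in to the new colormap by name; a minimal sketch (the random heatmap data is made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(20, 20)

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="viridis")  # explicit on < 2.0; the default from 2.0 on
fig.colorbar(im)
fig.savefig("heatmap.png")
```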
python  matplotlib 
august 2015 by HM0880
Tech Stuff - wxWidgets Survival Guide
Tech Stuff - wxWidgets Survival Guide

wxWidgets are just great. Features by the ton, free and cross platform. Lovely stuff. However, wxWidgets is a massive system. And the problem with massive systems is just how do you give each of a wide variety of users, with wildly varying needs and knowledge levels, what they want. It ain't easy. If wxWidgets is a really big deal for you then invest the time and read the project documentation written by the people who know what they are talking about. If you are an intermittent user, need a quick fix for a wxWidgets problem or want to enhance existing code - these notes may help. There again, they may not.

Being C people, we had also, mercifully, forgotten what an anally retentive language C++ is. Sigh.

Note: All information relates to wxWidgets 2.8.10 and may, or may not, be relevant to later versions.

wxWidgets has evolved its own wxSpeak. Undoubtedly for excellent reasons. No sense of trying to confuse the rest of us non-experts. Here is a translation table that may help:

wxWidgets         | Rest of Us | Notes
Frame (wxFrame)   | Window     | The big thing that has a title bar at the top! A frame is also a window.
Window (wxWindow) | Control    | Stuff like buttons and text boxes are all called windows in wxSpeak, though they will also have a specific type. Thus a normal clickable button has a class of wxButton, will be called a control in some wxWidgets documents, but is a window (has wxWindow properties).
Panel (wxPanel)   | -          | You need one of these, typically the same size as the Frame, to act as a container for all your windows/controls. So: create frame, create panel in frame, add windows (controls) to panel.
colour            | color      | The original work on wxWidgets was done by Brits (to be precise, a fellow Scot). However, Brits (and even Scots) have a regrettable habit of spelling color as colour (they may say the reverse about North Americans). Some options work with both spellings, others not. If your friendly local compiler tells you it's not a class member, try the Brit spelling. Or, if you have a lot of trouble, add a #define.
august 2015 by HM0880
Reading Excel with Python (xlrd) | programming notes
great cookbook for reading Excel files with Python


Every 6-8 months, when I need to use the python xlrd library, I end up re-finding this page:

Examples Reading Excel (.xls) Documents Using Python’s xlrd
In this case, I’ve finally bookmarked it. :)

from __future__ import print_function
from os.path import join, dirname, abspath
import xlrd

fname = join(dirname(dirname(abspath(__file__))), 'test_data', 'Cad Data Mar 2014.xlsx')

# Open the workbook
xl_workbook = xlrd.open_workbook(fname)

# List sheet names, and pull a sheet by name
sheet_names = xl_workbook.sheet_names()
print('Sheet Names', sheet_names)

xl_sheet = xl_workbook.sheet_by_name(sheet_names[0])

# Or grab the first sheet by index
# (sheets are zero-indexed)
xl_sheet = xl_workbook.sheet_by_index(0)
print('Sheet name: %s' % xl_sheet.name)

# Pull the first row by index
# (rows/columns are also zero-indexed)
row = xl_sheet.row(0) # 1st row

# Print 1st row values and types
from xlrd.sheet import ctype_text

print('(Column #) type:value')
for idx, cell_obj in enumerate(row):
    cell_type_str = ctype_text.get(cell_obj.ctype, 'unknown type')
    print('(%s) %s %s' % (idx, cell_type_str, cell_obj.value))

# Print all values, iterating through rows and columns
num_cols = xl_sheet.ncols # Number of columns
for row_idx in range(0, xl_sheet.nrows):    # Iterate through rows
    print('-' * 40)
    print('Row: %s' % row_idx)              # Print row number
    for col_idx in range(0, num_cols):      # Iterate through columns
        cell_obj = xl_sheet.cell(row_idx, col_idx)  # Get cell object by row, col
        print('Column: [%s] cell_obj: [%s]' % (col_idx, cell_obj))
python  resources 
august 2015 by HM0880
Learning Seattle's Work Habits from Bicycle Counts (Updated!)
discrete colorbar code in this article


Last year I wrote a blog post examining trends in Seattle bicycling and how they relate to weather, daylight, day of the week, and other factors.

Here I want to revisit the same data from a different perspective: rather than making assumptions in order to build models that might describe the data, I'll instead wipe the slate clean and ask what information we can extract from the data themselves, without reliance on any model assumptions. In other words, where the previous post examined the data using a supervised machine learning approach for data modeling, this post will examine the data using an unsupervised learning approach for data exploration.

Along the way, we'll see some examples of importing, transforming, visualizing, and analyzing data in the Python language, using mostly Pandas, Matplotlib, and Scikit-learn. We will also see some real-world examples of the use of unsupervised machine learning algorithms, such as Principal Component Analysis and Gaussian Mixture Models, in exploring and extracting meaning from data.

To spoil the punchline (and perhaps whet your appetite) what we will find is that from analysis of bicycle counts alone, we can make some definite statements about the aggregate work habits of Seattleites who commute by bicycle.
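The unsupervised approach described above can be sketched with scikit-learn's PCA; the Poisson-distributed counts below are a made-up stand-in for the real hourly bicycle data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for hourly bicycle counts: rows = days, columns = 24 hours.
rng = np.random.default_rng(42)
counts = rng.poisson(lam=5, size=(365, 24)).astype(float)

# Project each day onto the two directions of greatest variance.
pca = PCA(n_components=2)
projected = pca.fit_transform(counts)
print(projected.shape)  # each day becomes one point in 2D
```

In the real post, clusters in this 2D projection (found with a Gaussian Mixture Model) turn out to separate weekday-commute patterns from weekend-recreation patterns.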
python  review 
august 2015 by HM0880
Optimizing Python in the Real World: NumPy, Numba, and the NUFFT
Donald Knuth famously quipped that "premature optimization is the root of all evil." The reasons are straightforward: optimized code tends to be much more difficult to read and debug than simpler implementations of the same algorithm, and optimizing too early leads to greater costs down the road. In the Python world, there is another cost to optimization: optimized code often is written in a compiled language like Fortran or C, and this leads to barriers to its development, use, and deployment.

Too often, tutorials about optimizing Python use trivial or toy examples which may not map well to the real world. I've certainly been guilty of this myself. Here, I'm going to take a different route: in this post I will outline the process of understanding, implementing, and optimizing a non-trivial algorithm in Python, in this case the Non-uniform Fast Fourier Transform (NUFFT). Along the way, we'll dig into the process of optimizing Python code, and see how a relatively straightforward pure Python implementation, with a little help from Numba, can be made to nearly match the performance of a highly-optimized Fortran implementation of the same algorithm.
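As a small-scale illustration of the readability-versus-speed trade-off (not the NUFFT itself), here is a naive O(N²) DFT checked against NumPy's optimized FFT; the naive version is far slower but much easier to read:

```python
import numpy as np

def dft_slow(x):
    """Naive O(N^2) discrete Fourier transform via an explicit matrix."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    # Build the full DFT matrix, then apply it in one matrix-vector product.
    M = np.exp(-2j * np.pi * k * n / N)
    return M @ x

x = np.random.rand(128)
assert np.allclose(dft_slow(x), np.fft.fft(x))  # same answer, very different cost
```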
python  coding  review 
august 2015 by HM0880
The Hipster Effect: An IPython Interactive Exploration
This week I started seeing references all over the internet to this paper: The Hipster Effect: When Anticonformists All Look The Same. It essentially describes a simple mathematical model which models conformity and non-conformity among a mutually interacting population, and finds some interesting results: namely, conformity among a population of self-conscious non-conformists is similar to a phase transition in a time-delayed thermodynamic system. In other words, with enough hipsters around responding to delayed fashion trends, a plethora of facial hair and fixed gear bikes is a natural result.

Also naturally, upon reading the paper I wanted to try to reproduce the work. The paper solves the problem analytically for a continuous system and shows the precise values of certain phase transitions within the long-term limit of the postulated system. Though such theoretical derivations are useful, I often find it more intuitive to simulate systems like this in a more approximate manner to gain hands-on understanding. By the end of this notebook, we'll be using IPython's incredible interactive widgets to explore how the inputs to this model affect the results.

Mathematically Modeling Hipsters

We'll start by defining the problem, and going through the notation suggested in the paper. We'll consider a group of N people, and define the following quantities:

ϵ_i : this value is either +1 or −1. +1 means person i is a hipster, while −1 means they're a conformist.
s_i(t) : this is also either +1 or −1. This indicates person i's choice of style at time t. For example, +1 might indicate a bushy beard, while −1 indicates clean-shaven.
J_ij : the influence matrix. This is a value greater than zero which indicates how much person j influences person i.
τ_ij : the delay matrix. This is an integer telling us the length of delay for the style of person j to affect the style of person i.
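A rough simulation of this model can be sketched in a few lines of NumPy, under two simplifying assumptions of mine: uniform influence (J_ij = 1/N) and a single fixed delay τ for everyone:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, tau = 100, 200, 10          # population, time steps, uniform delay (assumed)

eps = rng.choice([-1, 1], size=N)            # +1 = hipster, -1 = conformist
s = rng.choice([-1, 1], size=(T, N))         # style history; rows t < tau seed it
J = np.ones((N, N)) / N                      # uniform influence matrix (assumption)

for t in range(tau, T):
    field = J @ s[t - tau]                   # delayed mean style each person sees
    # Conformists (eps = -1) align with the field; hipsters (eps = +1) oppose it.
    s[t] = np.sign(-eps * field)
    s[t][s[t] == 0] = 1                      # break exact ties arbitrarily
```

Plotting the mean style over time for this sketch shows the delayed anti-conformity oscillations the paper describes.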
python  ipython 
august 2015 by HM0880
Alexey Kachayev :: Talks
Talks and slides

Efficient, Concurrent and Concise Data Access | Video | 26.06.2015, EuroClojure 2015, Barcelona

Microservices in Clojure. Lessons learned | 16.04.2015, Kyiv Clojure Meetup #7

Errors Handling with core.async | 22.10.2014, Kyiv Clojure Meetup #4

Go: Channels Are Not Enough | 14.09.2014, Infinit loop '14

Parsing CSS File with Monadic Parser in Clojure | 30.08.2014, Perpetual motion '14

Deterministic Parallel and Distributed Programming with Clojure. Quick Intro | 03.07.2014, KievFProg #7

Monadic Parsing in Python | 07.06.2014, KyivPy #12

Persistent Data Structures | 22.03.2014, #10

Erlang in Production. Lessons learned (social application platform development with Erlang and Riak) | 16.11.2013, KyivFProg #11

Union-based Heaps in Haskell and Python | 26.10.2013, KyivPy #11

Channels & Concurrency: Go, Clojure, Erlang, Scala, Haskell | 03.08.2013, KievFProg

[Live coding] Real-time collaboration with Erlang and Websockets | Repo | History | 01.06.2013, HotCode 2013

Streams as fundamental abstraction | 01.06.2013, HotCode 2013

Web, Concurrency & Functional Programming | 31.05.2013, HotCode 2013

Functional programming for Web | PDF | 24.04.2013, iForum 2013

Stop Coding Pascal | PDF | 06.04.2013, KyivPy #10

ideas and internals | PDF | 27.03.2013, Kyiv FProg

Modern Concurrency: Erlang, Scala, Go, Clojure | PDF | 26.01.2013, KharkivPy #0

enjoy FP in Python | 19.01.2013, KyivPy #9

Lazy evaluation and declarative approach in Python | PDF | 08.12.2012, KharkivPy #6

Functional programming with Python | PDF | 20-21.10.2012, Kyiv, PyCon UA 2012
python  talks 
august 2015 by HM0880
Which one should I start first: Ruby on Rails or Django? - Quora
The biggest differences between Django and Rails are in culture. These are broad generalizations, but if you find yourself aligning more with one or the other, that may be a good indicator of which one you should learn first.

Rails (and Ruby by extension)
More into the expressiveness of the code and writing code that is clever. Loves the "magic" built in to the framework that makes things "just work" and makes writing applications fun. More concerned with the emotional and visual sides of user experience (how does the app feel to use?). Tend to be more into design and are more likely to have other artistic outlets in their lives like creative writing, art, or music. Rails guys want to be rock stars and make the world a better place by building applications that are bad ass and fun to use.

Django (and Python by extension)
More into the functionality and practicality of the framework. Likes things to be structured, consistent, and simple. Any creativity expressed in code should be in the way the problem was solved, not in the way the programming language itself was used. Values readability of code over being surprised by how clever or compact a line of code was. More concerned with the usability and simplicity sides of user experience. Django guys get their thrills out of solving problems quickly and efficiently while staying within the lines of the style guidelines provided by the Python community.


But, just to throw a curveball into the discussion, I would say that as much as I hate to say it, the best future is in JavaScript. You can now build full stack applications with JavaScript alone. If I were starting today, I would probably spend my time building full stack JavaScript apps (Node, Mongo, Express, AngularJS, Meteor, etc. etc.)
python  ruby 
august 2015 by HM0880
Battle of the Frameworks: Django Vs. Rails | SkilledUp
How to Decide?
If you are wondering which one is best for you, you might find it helpful to first answer this simple question. What are your future plans for learning Django or Rails? Some people are just interested in learning web development, while others might want to further develop their programming capabilities.

Cementing a foundation in either language is easy but the majority of the current Ruby community is built up around Rails. While Ruby is a scripting language that is capable of doing everything from GUI creation to image recognition, most Ruby developers find employment as web developers.

Python, on the other hand, has a community that is expanding the use of the language across many fields. It’s found a strong footing in the realm of data science. This has resulted in more professional applications of the language.

In New York, there are currently four times more Rails jobs available than Django. However, there are more than twice as many Python jobs as there are Ruby. If you are looking to develop a career solely in Web development, then Ruby on Rails might be the best choice. However, if you are looking to build a base in a language that has professional applications across the board, then Django and Python might be your best choice.
python  ruby 
august 2015 by HM0880
SQLAlchemy - The Database Toolkit for Python
The Python SQL Toolkit and Object Relational Mapper

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.

It provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.


SQL databases behave less like object collections the more size and performance start to matter; object collections behave less like tables and rows the more abstraction starts to matter. SQLAlchemy aims to accommodate both of these principles.

SQLAlchemy considers the database to be a relational algebra engine, not just a collection of tables. Rows can be selected from not only tables but also joins and other select statements; any of these units can be composed into a larger structure. SQLAlchemy's expression language builds on this concept from its core.

SQLAlchemy is most famous for its object-relational mapper (ORM), an optional component that provides the data mapper pattern, where classes can be mapped to the database in open ended, multiple ways - allowing the object model and database schema to develop in a cleanly decoupled way from the beginning.

The main goal of SQLAlchemy is to change the way you think about databases and SQL!
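A minimal sketch of the expression language against an in-memory SQLite database (assuming SQLAlchemy 1.4+; the `users` table here is invented for illustration):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

engine = create_engine("sqlite://")  # in-memory SQLite
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String))
metadata.create_all(engine)

# Insert rows, then select with a composable expression instead of raw SQL.
with engine.begin() as conn:
    conn.execute(insert(users), [{"name": "Ada"}, {"name": "Grace"}])

with engine.connect() as conn:
    rows = conn.execute(select(users).where(users.c.name == "Ada")).fetchall()
```

The `select(...)` object is itself a relational unit — it can be joined against, subqueried, or composed into a larger statement, which is the "relational algebra engine" idea described above.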
workdatabases  python 
july 2015 by HM0880
flake8 2.4.1 : Python Package Index
the modular source code checker: pep8, pyflakes and co

Flake8 is a wrapper around these tools:

PyFlakes
pep8
Ned Batchelder’s McCabe script

Flake8 runs all the tools by launching the single flake8 script. It displays the warnings in a per-file, merged output.

It also adds a few features:

files that contain this line are skipped:

# flake8: noqa
lines that contain a # noqa comment at the end will not issue warnings.

a Git and a Mercurial hook.

a McCabe complexity checker.

extendable through flake8.extension entry points.
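A sketch of the suppression comments described above (E401 and E225 are standard pep8 error codes; the code itself is otherwise meaningless):

```python
import os, sys  # noqa: E401  (multiple imports on one line, warning suppressed)

x=1  # noqa: E225  (missing whitespace around operator, also suppressed)
print(os.curdir, sys.platform, x)
```

Note that a bare `# flake8: noqa` line anywhere in a file tells flake8 to skip the entire file, while the per-line `# noqa` comments shown here suppress only that line.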
python  review 
july 2015 by HM0880
code golf - Counting from 1 to an Integer - Programming Puzzles & Code Golf Stack Exchange
Python 2, 48 * 0.8 = 38.4

i=0;exec"i+=1;print format(i,'032b'),i;"*input()
Uses string formatting to convert each number to binary with 32 digits, and also prints the decimal number for the bonus. Uses an exec loop to increment from 1 to the input value.
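An ungolfed Python 3 equivalent of the same idea (a readable sketch, not the golfed Python 2 answer):

```python
def count_binary(n):
    """Return numbers 1..n, each as 32-digit binary plus its decimal value."""
    lines = []
    for i in range(1, n + 1):
        # format(i, '032b') zero-pads the binary representation to 32 digits
        lines.append('%s %d' % (format(i, '032b'), i))
    return lines

for line in count_binary(3):
    print(line)
```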
python  codegolf  Matasano 
july 2015 by HM0880