**nhaliday + heavyweights + deep-learning**

He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial. But the code looks weird to a human eye. To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output. So has it really “learned” the polynomial? I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input. For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp. I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of. Relevant: Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)
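
To make the “weird but correct” flavor concrete, here is a minimal sketch (hypothetical, not the actual program from the demo) of how nested for-loops that only ever add 1 can still compute a quadratic, here f(n) = n² + 3n + 2:

```python
def f(n):
    total = 0
    # The doubly nested loop runs n * n times, contributing the n^2 term.
    for i in range(n):
        for j in range(n):
            total += 1
    # A single loop of 3n iterations contributes the linear term.
    for i in range(3 * n):
        total += 1
    # The constant term.
    total += 2
    return total

# Spot-check against the closed form on a few inputs.
for n in range(10):
    assert f(n) == n * n + 3 * n + 2
```

Extensionally, the loop version agrees with n² + 3n + 2 on every input, which is the computer scientist’s sense of having “learned” it; the algebraist’s objection is that nothing in the loop structure resembles the polynomial as an algebraic object.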

mathtariat org:bleg nibble tech ai talks summary philosophy lens comparison math cs tcs polynomials nlp debugging psychology cog-psych complex-systems deep-learning analogy legibility interpretability composition-decomposition coupling-cohesion apollonian-dionysian heavyweights
march 2017 by nhaliday
