Learning Math for Machine Learning
Vincent Chen
Vincent Chen is a student at Stanford University studying Computer Science. He is also a Research Assistant at the Stanford AI Lab.
It’s not entirely clear what level of mathematics is necessary to get started in machine learning, especially for those who didn’t study math or statistics in school.
In this piece, my goal is to suggest the mathematical background necessary to build products or conduct academic research in machine learning. These suggestions are derived from conversations with machine learning engineers, researchers, and educators, as well as my own experiences in both machine learning research and industry roles.
To frame the math prerequisites, I first propose different mindsets and strategies for approaching your math education outside of traditional classroom settings. Then, I outline the specific backgrounds necessary for different kinds of machine learning work, as these subjects range from high school-level statistics and calculus to the latest developments in probabilistic graphical models (PGMs). By the end of the post, my hope is that you’ll have a sense of the math education you’ll need to be effective in your machine learning work, whatever that may be!
To preface the piece, I acknowledge that learning styles, frameworks, and resources are unique to a learner's personal needs and goals; your opinions would be appreciated in the discussion on HN!
A Note on Math Anxiety
It turns out that a lot of people — including engineers — are scared of math. To begin, I want to address the myth of “being good at math.”
The truth is, people who are good at math have lots of practice doing math. As a result, they’re comfortable being stuck while doing math. A student’s mindset, as opposed to innate ability, is the primary predictor of one’s ability to learn math (as shown by recent studies).
To be clear, it will take time and effort to achieve this state of comfort, but it’s certainly not something you’re born with. The rest of this post will help you figure out what level of mathematical foundation you need and outline strategies for building it.
Getting Started
As soft prerequisites, we assume basic comfort with linear algebra/matrix calculus (so you don't get stuck on notation) and introductory probability. We also encourage basic programming competency, which we support as a tool to learn math in context. Afterwards, you can fine-tune your focus based on the kind of work you're excited about.
How to Learn Math Outside of School
I believe the best way to learn math is as a full-time job (i.e. as a student). Outside of that environment, it's likely that you won't have the structure, (positive) peer pressure, and resources available in the academic classroom.
To learn math outside of school, I’d recommend study groups or lunch and learn seminars as great resources for committed study. In research labs, this might come in the form of a reading group. Structure-wise, your group might walk through textbook chapters and discuss lectures on a regular basis while dedicating a Slack channel to asynchronous Q&A.
Culture plays a large role here — this kind of “additional” study should be encouraged and incentivized by management so that it doesn’t feel like it takes away from day-to-day deliverables. In fact, investing in peer-driven learning environments can make your long-term work more effective, despite short-term costs in time.
Math and Code
Math and code are highly intertwined in machine learning workflows. Code is often built directly from mathematical intuition, and it even shares the syntax of mathematical notation. In fact, modern data science frameworks (e.g. NumPy) make it intuitive and efficient to translate mathematical operations (e.g. matrix/vector products) to readable code.
I encourage you to embrace code as a way to solidify your learning. Both math and code depend on precision in understanding and notation. For instance, practicing the manual implementation of loss functions or optimization algorithms can be a great way to truly understand the underlying concepts.
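As one hedged sketch of that practice (the function names and data here are illustrative, not from any particular library), you might implement mean squared error and its gradient by hand in NumPy, then sanity-check the analytic gradient against a finite difference:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean squared error: L = (1/n) * sum((y_pred - y_true)^2)."""
    return np.mean((y_pred - y_true) ** 2)

def mse_grad(y_pred, y_true):
    """Analytic gradient w.r.t. y_pred: dL/dy_pred = (2/n) * (y_pred - y_true)."""
    return 2 * (y_pred - y_true) / y_pred.size

y_pred = np.array([2.5, 0.0, 2.0])
y_true = np.array([3.0, -0.5, 2.0])

# Finite-difference check: nudge one prediction slightly and compare slopes.
eps = 1e-6
nudged = y_pred.copy()
nudged[0] += eps
numeric = (mse_loss(nudged, y_true) - mse_loss(y_pred, y_true)) / eps
analytic = mse_grad(y_pred, y_true)[0]
print(abs(numeric - analytic) < 1e-5)  # the two slopes should agree
```

Writing the derivative out on paper first, then checking it numerically like this, forces the kind of precision the paragraph above describes.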
As an example of learning math through code, let’s consider a practical example: implementing backpropagation for the ReLU activation in your neural network (yes, even if Tensorflow/PyTorch can do this for you!). As a brief primer, backpropagation is a technique that relies on the chain rule from calculus to efficiently compute gradients. To utilize the chain rule in this setting, we multiply upstream derivatives by the gradient of ReLU.
To begin, we consider the ReLU activation, defined as relu(x) = max(0, x).
To compute the gradient (intuitively, the slope), you might write it as a piecewise function, denoted by an indicator: d relu/dx = 1 if x > 0, and 0 otherwise.
NumPy lends us helpful, intuitive syntax here: our activation function is directly interpretable in code, where x is our input and relu is our output:
relu = np.maximum(x, 0)
The gradient follows, where grad describes the upstream gradient:
grad[x < 0] = 0
Without first deriving the gradient yourself, this line of code might not be self-explanatory. It sets all values in the upstream gradient (grad) to 0 for all elements that satisfy the condition x < 0. Mathematically, this is effectively equivalent to the piecewise representation of ReLU's gradient, which squashes the upstream gradient to 0 wherever the input is less than 0!
As we see here, walking through our basic understanding of calculus gives us a clear way to think through the code. The full example of this neural network implementation can be found here.
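Collecting the pieces above into one self-contained sketch, the forward and backward pass for ReLU might look like the following (the function names and caching convention here are illustrative, not from a particular framework):

```python
import numpy as np

def relu_forward(x):
    # Forward pass: elementwise max(0, x).
    out = np.maximum(x, 0)
    return out, x  # cache the input for use in the backward pass

def relu_backward(upstream_grad, x):
    # Backward pass: by the chain rule, multiply the upstream gradient
    # by ReLU's local gradient, which is 1 where x > 0 and 0 elsewhere.
    grad = upstream_grad.copy()
    grad[x < 0] = 0
    return grad

x = np.array([-2.0, -0.5, 0.5, 3.0])
out, cache = relu_forward(x)
grad = relu_backward(np.ones_like(x), cache)
print(out)   # negatives squashed to 0: values 0, 0, 0.5, 3
print(grad)  # upstream ones pass through only where x > 0: values 0, 0, 1, 1
```

The backward function is exactly the one-liner from the post, wrapped so the cached input and upstream gradient are explicit.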
Math for Building Machine Learning Products
To inform this section, I spoke to machine learning engineers to figure out where math was most helpful in debugging their systems. The following are examples of questions that engineers found themselves answering with mathematical insights. If you haven't seen them, no worries; the hope is that this section will provide some context into specific kinds of questions you might find yourself answering!
What clustering method should I use to visualize my high-dimensional customer data?
○ Approach: PCA vs. t-SNE
How should I calibrate a threshold (e.g. confidence level 0.9 vs. 0.8) for “blocking” fraudulent user transactions?
○ Approach: Probability calibration
What’s the best way to characterize the bias of my satellite data to specific regions of the world (Silicon Valley vs. Alaska)?
○ Approach: Open research question. Perhaps, aim for demographic parity?
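As a minimal sketch of the first question above (the random data here merely stands in for high-dimensional customer data), PCA can be run directly with NumPy's SVD, with no extra libraries:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))  # 100 "customers", 20 features each

# PCA via SVD: center the data, then project onto the
# top-2 right singular vectors (the principal directions).
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ Vt[:2].T  # 2-D coordinates, ready for a scatter plot

print(X_2d.shape)  # (100, 2)
```

Knowing that PCA is "just" a centered SVD is precisely the kind of linear-algebra insight that makes the visualization question tractable.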
Generally, statistics and linear algebra can be employed in some way for each of these questions. However, arriving at satisfactory answers often requires a domain-specific approach. If that's the case, how do you narrow down the kind of math you need to learn?
Define Your System
There is no shortage of resources (e.g. scikit-learn for data analysis, keras for deep learning) that will help you jump into writing code to model your systems. In doing so, try to answer the following questions about the pipeline you need to build:
What are the inputs/outputs of your system?
How should you prepare your data to fit your system?
How can you construct features or curate data to help your model generalize?
How do you define a reasonable objective for your problem?
You’d be surprised — defining your system can be hard! Afterwards, the engineering required for pipeline-building is also non-trivial. In other words, building machine learning products requires significant amounts of heavy lifting that don’t require a deep mathematical background.
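To make those four questions concrete, here is a toy sketch (entirely illustrative, using ordinary least squares in NumPy) of a minimal "system": raw inputs and targets, a feature-construction step to help generalization, and an explicit objective:

```python
import numpy as np

# Inputs/outputs: raw measurements x, noisy quadratic targets y.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x ** 2 + 0.5 + 0.05 * rng.normal(size=200)

# Feature construction: a raw linear fit would fail, so add an x^2 feature.
features = np.stack([np.ones_like(x), x, x ** 2], axis=1)

# Objective: mean squared error, minimized in closed form by least squares.
weights, *_ = np.linalg.lstsq(features, y, rcond=None)
mse = np.mean((features @ weights - y) ** 2)
print(weights)  # should be near [0.5, 0.0, 3.0]
print(mse)      # small, on the order of the noise variance
```

Even in this toy, the hard part is the framing (which features, which objective), not the math library call.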
Resources
• Best Practices for ML Engineering by Martin Zinkevich, Research Scientist at Google
Learning Math as You Need It
Diving headfirst into a machine learning workflow, you might find that there are some steps that you get stuck at, especially while debugging. When you’re stuck, do you know what to look up? How reasonable are your weights? Why isn’t your model converging with a particular loss definition? What’s the right way to measure success? At this point, it may be helpful to make assumptions about the data, constrain your optimization differently, or try different algorithms.
Often, you’ll find that there’s mathematical intuition baked into the modeling/debugging process (e.g. selecting loss functions or evaluation metrics) that could be instrumental to making informed, engineering decisions. These are your opportunities to learn!
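For example, a quick sketch (with made-up labels) of why "the right way to measure success" is itself a mathematical decision: on imbalanced data, accuracy can look excellent while recall collapses to zero:

```python
import numpy as np

# 95 negatives, 5 positives; a degenerate model predicts "negative" for everything.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_pred == y_true)  # 0.95: looks great
recall = np.sum((y_pred == 1) & (y_true == 1)) / np.sum(y_true == 1)  # 0.0

print(accuracy, recall)
```

Noticing this gap, and then reaching for precision/recall or a calibrated threshold, is exactly the on-demand math the section describes.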
Rachel Thomas from Fast.ai is a proponent of this “on-demand” method — while educating students, she found that it was more important for her deep learning students to get far enough to become excited about the material. Afterwards, their math education involved filling in the holes, on-demand.
Resources
• Course: Computational Linear Algebra by fast.ai
• YouTube: 3blue1brown: Essence of Linear Algebra and Calculus
• Textbook: Linear Algebra Done Right by Axler
• Textbook: Elements of Statistical Learning by Tibshirani et al.
• Course: Stanford’s CS229 (Machine Learning) Course Notes
Math for Machine Learning Research
I now want to characterize the type of mathematical mindset that is useful for research-oriented work in machine learning. The cynical view of machine learning research points to plug-and-play systems where more compute is thrown at models to squeeze out higher performance. In some circles, researchers remain skeptical that empirical methods lacking in mathematical rigor (e.g. certain deep learning methods) can carry us to the holy grail of human-level intelligence.
It's concerning that the research world might be building on existing systems and assumptions that don't extend our fundamental understanding of the field. Researchers need to contribute primitives: new, foundational building blocks that can be used to derive entirely new insights and approaches to goals in the field. For instance, this might mean rethinking building blocks like Convolutional Neural Networks for image classification, as Geoff Hinton […]
Thanks to Ambika Acharya, Janice Lan, Winnie Lin, Michael Nielsen, Rachel Thomas, and Lisa Wang for taking the time to chat about math in engineering, education, and research roles, and to Remi Cadene, Craig Cannon, Adithya Ganesh, Janice Lan, Addison Leong, Ranjay Krishna, and Paroma Varma for feedback on drafts.