Learning to Optimize
Ke Li, Jitendra Malik
{ke.li,malik}@eecs.berkeley.edu
arXiv:1606.01885 (June 2016) · arXiv:1703.00441 (March 2017)
Poster: ke.li/papers/lto_iclr17_poster.pdf
Properties of the Learning Problem
• This domain is prone to overfitting and underfitting.
• If we want to do well on a single objective function:
– Consider an algorithm that memorizes the optimum.
– This is the best optimizer, since it reaches the optimum in one step.
• If we want to do well on all objective functions:
– Given any optimizer, we can always construct an objective function on which it performs poorly.
• Goal: do well on a class of objective functions with similar geometry, e.g.:
– Logistic regression loss functions
– Neural net classification loss functions
Introduction
• Optimization problems are ubiquitous in science and engineering.
• Devising a new optimization algorithm manually is challenging. Is there a better way?
• If the mantra of machine learning is to learn what is traditionally manually designed…
Why not learn the optimization algorithm itself?
Challenges
• The prediction of the neural net at any point in time affects the inputs that it sees in the future.
• This violates the i.i.d. assumption of supervised learning.
• Compounding errors: a policy trained using supervised learning does not know how to recover from previous mistakes.
• A supervised learner that makes a mistake with probability $\epsilon$ incurs a cumulative error of $O(\epsilon T^2)$, rather than $O(\epsilon T)$. (Ross and Bagnell, 2010)
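The gap between $O(\epsilon T)$ and $O(\epsilon T^2)$ can be illustrated with a small simulation (our own sketch, not from the poster): a policy that cannot recover keeps erring on every step after its first mistake, so for small $\epsilon$ its expected cumulative error grows roughly like $\epsilon T^2 / 2$, versus $\epsilon T$ for a policy that errs independently at each step.

```python
import random

def cumulative_errors(T, eps, recovers, trials=5000, seed=0):
    """Average number of erroneous steps over a horizon of T steps.

    The policy errs with probability eps at each step; if it cannot
    recover, every step after the first mistake is also erroneous.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        off_track = False
        for _ in range(T):
            if off_track and not recovers:
                total += 1          # compounding: still off-distribution
            elif rng.random() < eps:
                total += 1          # fresh mistake
                off_track = True
    return total / trials

T, eps = 100, 0.005
print(cumulative_errors(T, eps, recovers=True))   # ~ eps*T = 0.5
print(cumulative_errors(T, eps, recovers=False))  # ~ eps*T^2/2, much larger
```

With recovery, errors are independent and sum to about $\epsilon T$; without it, step $t$ is wrong whenever any of the first $t$ steps was, which is what drives the quadratic growth.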
Formulation
• Given: a set of training objective functions $f_1, \dots, f_n \sim \mathcal{F}$, a distribution $\mathcal{D}$ for initializing the iterate, and a meta-loss $L(f, x^{(1)}, \dots, x^{(T)})$ that measures the quality of the iterates $x^{(1)}, \dots, x^{(T)}$.
• An optimization algorithm $\mathcal{A}$ takes an objective function $f$ and an initial iterate $x^{(0)}$ as input and produces a sequence of iterates $x^{(1)}, \dots, x^{(T)}$.
• Goal: learn $\mathcal{A}^*$ such that $\mathbb{E}_{f \sim \mathcal{F},\, x^{(0)} \sim \mathcal{D}}\big[ L(f, \mathcal{A}^*(f, x^{(0)})) \big]$ is minimized.
• We choose $L(f, x^{(1)}, \dots, x^{(T)}) = \sum_{i=1}^{T} f(x^{(i)})$.
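As a concrete sketch of this formulation (illustrative code; a hand-rolled gradient-descent optimizer stands in for $\mathcal{A}$, and names like `meta_loss` are our own), note that the chosen meta-loss sums the objective value at every iterate, so it rewards fast descent rather than only the final value:

```python
import numpy as np

def quadratic(center):
    """A toy objective f(x) = ||x - center||^2 and its gradient."""
    f = lambda x: float(np.sum((x - center) ** 2))
    grad = lambda x: 2.0 * (x - center)
    return f, grad

def gradient_descent(f, grad, x0, T=20, lr=0.1):
    """An optimizer A: maps (f, x0) to the iterates x(1), ..., x(T)."""
    xs, x = [], x0
    for _ in range(T):
        x = x - lr * grad(x)
        xs.append(x)
    return xs

def meta_loss(f, iterates):
    """L(f, x(1), ..., x(T)) = sum_i f(x(i))."""
    return sum(f(x) for x in iterates)

rng = np.random.default_rng(0)
losses = []
for _ in range(5):                       # training objectives f1, ..., fn ~ F
    f, grad = quadratic(rng.normal(size=3))
    x0 = rng.normal(size=3)              # x(0) ~ D
    losses.append(meta_loss(f, gradient_descent(f, grad, x0)))
print(np.mean(losses))                   # empirical estimate of E[L(f, A(f, x(0)))]
```

Learning $\mathcal{A}^*$ amounts to replacing `gradient_descent` with a parameterized update rule and minimizing this empirical average over its parameters.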
Parameterizing Optimization Algorithms
• Input: recent history of iterates, gradients, and objective values
• Output: step vector $\Delta x = \phi(\cdot)$
• Searching over the space of optimization algorithms reduces to learning the parameters of the neural net.

Gradient Descent: $\phi(\cdot) = -\gamma\, \nabla f(x^{(i-1)})$
Momentum: $\phi(\cdot) = -\gamma \left( \sum_{j=0}^{i-1} \alpha^{\,i-1-j}\, \nabla f(x^{(j)}) \right)$
Learned Algorithm: $\phi(\cdot) = $ neural net
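All three update rules share one interface, a map $\phi$ from the gradient history to a step, so the learned optimizer drops into the same outer loop $x^{(i)} = x^{(i-1)} + \phi(\cdot)$. A sketch (the two-layer net in `phi_learned` is our own hypothetical choice; the poster does not fix an architecture):

```python
import numpy as np

def phi_gradient_descent(grad_history, gamma=0.1):
    """phi(.) = -gamma * grad f(x^(i-1)): uses only the latest gradient."""
    return -gamma * grad_history[-1]

def phi_momentum(grad_history, gamma=0.1, alpha=0.9):
    """phi(.) = -gamma * sum_{j=0}^{i-1} alpha^(i-1-j) grad f(x^(j))."""
    i = len(grad_history)
    return -gamma * sum(alpha ** (i - 1 - j) * g
                        for j, g in enumerate(grad_history))

def phi_learned(grad_history, W1, b1, W2, b2, window=3):
    """phi(.) = small neural net applied to the recent gradient history.

    Assumes grad_history is non-empty; early iterations are zero-padded
    so the input size stays fixed at window * dim.
    """
    recent = list(grad_history[-window:])
    while len(recent) < window:
        recent = [np.zeros_like(recent[0])] + recent
    h = np.tanh(W1 @ np.concatenate(recent) + b1)
    return W2 @ h + b2
```

Learning the optimizer then reduces to fitting `(W1, b1, W2, b2)`, exactly the reduction the bullet above describes.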
Reinforcement Learning
• The goal of RL is to find:
$\pi^* = \arg\min_{\pi} \; \mathbb{E}_{s_0, a_0, s_1, \dots, s_T} \left[ \sum_{t=0}^{T} c(s_t) \right]$
where the expectation is taken w.r.t.
$q(s_0, a_0, s_1, \dots, s_T) = p_0(s_0) \prod_{t=0}^{T-1} \pi(a_t \mid s_t, t)\, p(s_{t+1} \mid s_t, a_t)$
Here $c$ is the cost, $s_t$ the state, $a_t$ the action, $p_0$ the initial state distribution, $\pi$ the policy, $p$ the dynamics, and $T$ the time horizon.
• In our setting, the state holds the recent history of iterates, gradients, and objective values, the action is the step $\phi(\cdot)$, and the cost is the objective value $f(x^{(i)})$.
• The method we use is Guided Policy Search (Levine and Abbeel, 2014), which alternates between computing target trajectories and training the policy to replicate them. More precisely, it solves:
$\min_{\theta, \eta} \mathbb{E} \left[ \sum_{t=0}^{T} c(s_t) \right] \quad \text{s.t.} \quad \psi(a_t \mid s_t, t; \eta) = \pi(a_t \mid s_t; \theta) \quad \forall\, a_t, s_t, t$
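The objective $\mathbb{E}[\sum_t c(s_t)]$ can be estimated by sampling trajectories exactly as the factorization of $q$ prescribes: draw $s_0 \sim p_0$, then alternate $a_t \sim \pi(\cdot \mid s_t, t)$ and $s_{t+1} \sim p(\cdot \mid s_t, a_t)$. A minimal Monte Carlo sketch (the toy cost, policy, and dynamics below are our own illustrative choices, not from the poster):

```python
import numpy as np

def expected_cost(policy, dynamics, p0_sample, cost, T, rollouts=1000, seed=0):
    """Monte Carlo estimate of E[sum_{t=0}^{T} c(s_t)] under
    q(s0,a0,...,sT) = p0(s0) * prod_t pi(a_t|s_t,t) p(s_{t+1}|s_t,a_t)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(rollouts):
        s = p0_sample(rng)
        total += cost(s)
        for t in range(T):
            a = policy(s, t, rng)       # a_t ~ pi(.|s_t, t)
            s = dynamics(s, a, rng)     # s_{t+1} ~ p(.|s_t, a_t)
            total += cost(s)
    return total / rollouts

# Toy instantiation:
cost = lambda s: float(s @ s)                          # c(s) = ||s||^2
p0_sample = lambda rng: rng.normal(size=2)             # s0 ~ N(0, I)
policy = lambda s, t, rng: -0.5 * s + 0.01 * rng.normal(size=2)
dynamics = lambda s, a, rng: s + a                     # p(s'|s,a) = delta(s + a)
print(expected_cost(policy, dynamics, p0_sample, cost, T=10))
```

Guided Policy Search goes further than plain rollouts: it fits a separate trajectory distribution $\psi(\cdot; \eta)$ to low-cost trajectories and constrains the policy $\pi(\cdot; \theta)$ to match it, which is the constrained problem stated above.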
Experiments
• We trained optimizers for the following classes of low-dimensional optimization problems:
– Logistic Regression (Convex)
– Robust Linear Regression (Non-convex)
– Small Neural Net Classifier (Non-convex)
• Trained on a set of random problems.
• Tested on a different set of random problems.
[Plots: Logistic Regression; Robust Linear Regression; Small Neural Net; Wider Architecture; Noisier Gradients; Wider Architecture and Noisier Gradients; Wider Architecture and Longer Time Horizon]
Credit: John Schulman
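"Trained on a set of random problems" means each training objective is a fresh random draw of data defining, e.g., a logistic regression loss; the test set uses different draws from the same family. A hypothetical generator sketch (dataset sizes and noise level are our own choices):

```python
import numpy as np

def random_logistic_regression_problem(n=50, d=3, seed=None):
    """Draw a random dataset and return the logistic loss f and its gradient.

    Each call yields one training objective function from the family F;
    train and test problems differ only in the random draw.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

    def f(w):
        z = X @ w
        # numerically stable form of mean(log(1 + exp(z)) - y*z)
        return float(np.mean(np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z))) - y * z))

    def grad(w):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))    # sigmoid(Xw)
        return X.T @ (p - y) / n

    return f, grad
```

Because every draw shares the same geometry (a logistic loss over Gaussian features), an optimizer tuned on training draws can plausibly transfer to unseen test draws, which is the premise of the experiments above.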
Learning to Optimize Neural Nets (https://arxiv.org/abs/1703.00441)
• Trained the optimizer on the experience of training a neural net on MNIST (a single objective function).
• Tested it on the problems of training a neural net on the Toronto Faces Dataset, CIFAR-10, and CIFAR-100.