Transcript
Advanced Machine Learning: Practical Applications and Use Cases
Conference Notes on some interesting practical applications. Implementation approaches are explained in related videos and slides.
Video Links ~ Data Science Summit 2016 ~ ML Conf 2016 ( slides - http://www.slideshare.net/SessionsEvents/presentations )
— Advanced Recommender Applications
~ (Deep Personalization Techniques by Amazon) https://www.youtube.com/watch?v=MMu0nDA-nog , https://www.youtube.com/watch?v=ofaPq5aRKZ0&index=10&list=PLrbAIdPI69Pi88waiIv8gZ3agEU_hBaVM
~ (Advanced Recommender for Marketing) https://www.youtube.com/watch?v=2DHNwpu_jKg
~ (Graph Reco Algo by Teachers-pay-Teachers) https://www.youtube.com/watch?v=oFXVbvVqHpY
~ (Stitch Fix: Synthesize Human-Machine Capabilities) https://www.youtube.com/watch?v=flLCO6An6xI
~ (Community Detection) https://www.youtube.com/watch?v=uS6ifgdp86w
~ (Explain the Recommendations) https://www.youtube.com/watch?v=ABweFa7Y6aA
Using LSTMs as a non-parametric hierarchical model to share statistical strength across past observations, feeding them as continuous inputs to the next layers.
Instead of estimating parameters, the functions themselves are estimated.
Why LSTM? Because we have to make a relative temporal decision that should be adjusted gradually.
Build the model gradually and calculate a survival probability, e.g. find the probability that a user will "survive" until time t without opening the app.
When will the user open the app? Is the user really going to like the movie two weeks from now?
Did the user's taste for the product change over time?
Are they going to come back tomorrow, next week, never?
How engaged is the user over time?
The answer is not a binary decision (like switching to a different insurance company or credit provider) but a relative temporal decision that should be adjusted gradually…
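Below is a minimal sketch of the idea in PyTorch (my choice of framework; the layer sizes and feature counts are illustrative, not from the talk): an LSTM consumes a user's event sequence and emits, at each step, a hazard that is folded into the survival probability up to time t.

```python
# Hedged sketch: LSTM over a user's event sequence, producing a per-step
# survival curve S(t) = prod_{s <= t} (1 - hazard_s).
import torch
import torch.nn as nn

class SurvivalLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, events):
        # events: (batch, time, n_features) sequence of observations x_t
        states, _ = self.lstm(events)            # latent user state at each step
        hazard = torch.sigmoid(self.head(states)).squeeze(-1)  # per-step hazard
        survival = torch.cumprod(1.0 - hazard, dim=1)          # S(t) per step
        return survival

model = SurvivalLSTM(n_features=8)
x = torch.randn(2, 30, 8)                        # 2 users, 30 time steps
print(model(x).shape)                            # (2, 30) survival curve per user
```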
So what does the LSTM look like?
The central cell models the latent state of the user; x_t is the observation.
Now we get a global model of how the world changes:
We also learn the individual state!
Overall, for large amounts of temporal data, avoid feature engineering…
Let's imagine we have an LSTM for movie attributes and an LSTM for user attributes.
Every time a user watches a movie, something changes (e.g. cluster membership) due to user interest…
so consider every movie as a function of time… not just a collection of attributes…
Normal memory cells can read, write, and erase.
LSTM cells can read 30%, erase 60%… which is what makes the cells differentiable… the cells hold on to information for a while and erase it at a later point in a sequence… adding long-range dependencies…
e.g. look back 20 words to find a hint…
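A tiny NumPy sketch of this "fractional read / erase" behavior (my illustration, not from the talk): the gates are sigmoids, so the cell erases and writes soft fractions rather than making hard 0/1 decisions, which is exactly what keeps the cell differentiable.

```python
# One step of a hand-rolled LSTM cell showing the soft gate fractions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    # W holds the four gate weight matrices over the concatenated [h, x]
    z = np.concatenate([h, x])
    f = sigmoid(W["f"] @ z)        # forget gate: fraction of c to keep (soft erase)
    i = sigmoid(W["i"] @ z)        # input gate: fraction of candidate to write
    o = sigmoid(W["o"] @ z)        # output gate: fraction of state to read
    g = np.tanh(W["g"] @ z)        # candidate cell update
    c = f * c + i * g              # e.g. erase 60%, write 30%
    h = o * np.tanh(c)             # soft read
    return h, c

dim, xdim = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((dim, dim + xdim)) for k in "fiog"}
h, c = np.zeros(dim), np.zeros(dim)
h, c = lstm_step(rng.standard_normal(xdim), h, c, W)
```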
State-based DataFlow Computation
>> remember and propagate biases so that a node can update them at a later stage.
>> nodes automatically update the model and add operations to calculate symbolic gradients of variables w.r.t. the loss function.
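A hedged sketch of this dataflow idea using TensorFlow (the talk doesn't name a framework; the TF2 API below is my choice): variables are stateful nodes that persist between updates, and the recorded graph automatically supplies the operations for symbolic gradients w.r.t. the loss.

```python
# Stateful dataflow nodes plus automatically derived gradient ops.
import tensorflow as tf

w = tf.Variable(2.0)    # stateful node: remembers its value between updates
b = tf.Variable(0.5)

with tf.GradientTape() as tape:
    y_pred = w * 3.0 + b
    loss = (y_pred - 1.0) ** 2

# gradient operations are added automatically from the recorded dataflow
dw, db = tape.gradient(loss, [w, b])
w.assign_sub(0.1 * dw)  # the node updates its own state at a later stage
b.assign_sub(0.1 * db)
```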
Collaborative Use Case
— Correlational (user / item based) -> finds the relationships between similar entities
— Dimensionality reduction (via Matrix Factorization) —> collapses everything into attribute space
A movie obtains various ratings from various users based on different attributes, and similar movies obtain similar ratings.
Many users give similar ratings to certain movies based on various attributes.
** Use a compression algorithm like SVD to reduce the number of dimensions by factoring the rating matrix into a set of latent dimensions **
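A minimal sketch with scikit-learn's TruncatedSVD (my choice of library; the toy matrix and n_components=2 are illustrative): factor the user × movie rating matrix into a small set of latent dimensions.

```python
# Factor a user x movie rating matrix into 2 latent dimensions via SVD.
import numpy as np
from sklearn.decomposition import TruncatedSVD

ratings = np.array([   # rows: users, columns: movies (0 = unrated)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

svd = TruncatedSVD(n_components=2, random_state=0)
user_factors = svd.fit_transform(ratings)       # users in latent space
movie_factors = svd.components_.T               # movies in latent space
approx = user_factors @ movie_factors.T         # reconstructed ratings
print(np.round(approx, 1))
```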
** Recommendation Best Practices **
— should recommend something that was not previously recommended to the same user
— don't just show 'what I recently liked'; factor in some human behaviors and show some new suggestions!
— diversification can add value even when accuracy decreases.
— serendipity: users love temporal suggestions, i.e. what other users like at this moment
Ref: Kapoor & Kumar, Adapting to Dynamic User Novelty Preferences.
Ref: User Perception of Differences in Recommender Algorithms. In Proc. RecSys '14.
Ref: Using Groups of Items for Preference Elicitation in Recommender Systems. In Proc. CSCW '15.
How can we assert that the Recommendations are correct ?
Content-based Explanation —
Entity-based Explanation — show correlated named entities (Person, Location) and highlight them
Usage-based Explanation — mention 'Frequently Bought Together' items
How to ensure the algorithm works without any issues ?
Reference: Coursera course on Recommender Systems; lenskit.org
— PCA to explain most of the variance; Matrix Factorization to find latent attributes (to find relevance to other customers); a mix-and-match model (relating product and customer attributes); ANNs (for image and text data)
— Final ranked products are sent to a queue, which then triggers human activities (curation / a personal touch)
* Feature Space Computation — TF-IDF transform on documents (stop-word removal, stemming, POS selection, spell-checking, etc.)
* Synonym Computation — use word embeddings (GloVe, word2vec) — can build a synonym graph (using Wikipedia / a dictionary) — an unsupervised algorithm
* Sentiment Computation — use VADER (Valence Aware Dictionary and sEntiment Reasoner) — find the intensity of the sentiment as a probability density function (pdf)
1) As always, first do the Exploratory Data Analysis — find the most important items with a good number of review scores; show a rolling average of the review-score trend over time; discard very long / very short reviews and those that can't be tokenized well
2) Apply NLP to find tokens in each review
3) Then perform dictionary-based sentiment analysis per review
4) Store all results for index / search / lookups in Elasticsearch using the msgpack format
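A minimal sketch of steps 2-3 (the library choices are mine; the talk names only VADER): tokenize the reviews into a TF-IDF feature space, then run VADER's dictionary-based analyzer per review.

```python
# Step 2: token feature space; step 3: dictionary-based sentiment per review.
from sklearn.feature_extraction.text import TfidfVectorizer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

reviews = [
    "Great product, works exactly as advertised!",
    "Terrible quality, broke after two days.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reviews)        # TF-IDF token features

analyzer = SentimentIntensityAnalyzer()
for text in reviews:
    scores = analyzer.polarity_scores(text)      # neg/neu/pos + compound in [-1, 1]
    print(scores["compound"], text)
```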
https://www.youtube.com/watch?v=L0zQh-ii3sI
(B3) Medical Treatment Data Analysis
Who needs to be vaccinated? Who fits this clinical trial? Who is at risk for sepsis? Who on this protocol didn't have this side effect? Who is getting meds they are allergic to?
(C1) Video Demo ~ https://www.youtube.com/watch?v=IQXkq0_rruU
(C2) Advertisement Prediction and Optimization - by AOL
Video Link ~
Advertisers want clicks or conversion predictions
Every time a user visits ABC.com, it opens up an advertising opportunity for ABC by doing real-time bidding on ABC's behalf! After winning the bids, the ads need to be placed!
Statistical signal processing is used to extract signal from non-stationary data.
(Plot from the talk: the predicted conversion rate catches up with the observed conversion rate over time.)
Once an ad is won, boost and explore the performance of the ad, then exploit the information gained —
Classical Operations Research theory (multi-armed bandit theory) is applied: the value associated with the information gained by showing a specific ad is estimated using techniques similar to option valuation!
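As one concrete instance of the bandit idea (my illustration; the talk gives no code), an epsilon-greedy multi-armed bandit that treats each ad as an arm and trades off exploring ads against exploiting the best observed click-through rate:

```python
# Epsilon-greedy bandit: each arm is an ad; explore with prob. epsilon,
# otherwise exploit the ad with the best observed CTR.
import random

class EpsilonGreedyBandit:
    def __init__(self, n_ads, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = [0] * n_ads   # observed clicks per ad
        self.shows = [0] * n_ads    # impressions per ad

    def pick_ad(self):
        if random.random() < self.epsilon:               # explore
            return random.randrange(len(self.shows))
        ctr = [c / s if s else 0.0 for c, s in zip(self.clicks, self.shows)]
        return max(range(len(ctr)), key=ctr.__getitem__)  # exploit best CTR

    def record(self, ad, clicked):
        self.shows[ad] += 1
        self.clicks[ad] += int(clicked)

bandit = EpsilonGreedyBandit(n_ads=3)
true_ctr = [0.02, 0.05, 0.03]                    # hidden, to be discovered
for _ in range(10000):
    ad = bandit.pick_ad()
    bandit.record(ad, random.random() < true_ctr[ad])
print(bandit.shows)   # the 5% CTR ad should dominate impressions
```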
(D) Personalized Content Blending - A Complex Combination of Use Cases
Pinterest Video Link ~ https://www.youtube.com/watch?v=mN6MrzL1i78
Data : Large bipartite graph of 10B+ pins and 800M+ collections of Pinterest
Use Cases :
— Pin and board recommendations
— New-user interest recommendations
— User action prediction (drives ads and monetization)
— Email timings, frequency, content (notifications)
— Which pins are related to a given pin
— Pin rankings (common topics)
Interesting Problems :
Product Comprehension - optimize the business metric - WAU28 (weekly active users after 28 days)
360 Recommendation
A brand-new user comes :
> How to keep them engaged ?
> What topics to recommend ?
> Let them select a list of interests
Given a pin, what interest does it belong to?
How to translate the text in images into different languages?
How to add captions to images?
How to identify a cluster of buyers and send a product-recommendation email to the cluster?
Conventional collaborative filtering will not solve this problem…
— coefficients are hard to interpret
— lack of a distance measure
So let's pre-cluster, i.e. form groups of buyers based on behavior.
Create a user node, and connect two user nodes if they have made a common purchase. Multiple common purchases get a higher edge weight.
Now run a community detection algorithm on the graph.
Query the edges to find the common item category, then label the clusters as, say, 'Buyer-ItemType1', 'Buyer-ItemType2', etc.
To optimize, run the algorithm on a fraction of the users, then perform label propagation to fill in the rest. (See the sketch below.)
Now query what this cluster has been buying over the last 4 weeks! Accordingly, recommend the items most purchased by a cluster to a user.
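A minimal networkx sketch of the buyer graph and clustering (the library, algorithm, and toy data are my choices; the talk doesn't name an implementation):

```python
# Users become nodes, shared purchases become weighted edges, and a
# community-detection algorithm yields the buyer clusters.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

purchases = {                      # toy data: user -> set of items bought
    "u1": {"tent", "stove"},
    "u2": {"tent", "stove", "lantern"},
    "u3": {"lipstick", "mascara"},
    "u4": {"lipstick"},
}

G = nx.Graph()
users = list(purchases)
for i, a in enumerate(users):
    for b in users[i + 1:]:
        common = len(purchases[a] & purchases[b])
        if common:                 # more common purchases -> higher edge weight
            G.add_edge(a, b, weight=common)

for cluster in greedy_modularity_communities(G, weight="weight"):
    print(sorted(cluster))        # e.g. ['u1', 'u2'] and ['u3', 'u4']
```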
(E) Anomaly & Time-series Pattern Detection
(E1) Complex Time Series Data Analysis
Cluster spatiotemporal distribution of housing data - based on underlying price dynamics
Video Link ~ https://www.youtube.com/watch?v=E9XTOnEgqRY
Dynamical Modeling…. completely unsupervised learning!
>> (Evolution) dynamics across time for a specific series
>> (Interaction Structure)
Goals : prediction, forecasting, classification, retrospective analysis, interpretation
How to capture the interaction between the time series?
How to discover structure across time? The answer is segmentation into behaviors!
One level of neurons is activated by the features, learning that it's a CAT… next, supervised learning is used to adjust the model, which tells it it's a DOG, so the deep learning network updates its learning…
Future possibilities ~ building personal assistants and health-care systems (medicine production) by combining vision with robotics.
(F2) Recurrent Neural Network to Generate Gmail's Smart Reply
http://www.slideshare.net/SessionsEvents/anjuli-kannan-software-engineer-google-at-mlconf-sf-2016
(G1) Using Deep Reinforcement Learning for Dialog System
Video Demo ~
Reinforcement Learning - a data-driven approach for learning behavior:
find the most suitable behavior, i.e. estimate a function that maps environment states to actions.
RL is the best approach as
> it's not necessary to specify the function explicitly
> it's easy to identify the correct output
> it's easy to specify the behavior
A robot is in state theta1, theta2, wt1, wt2 (its angles of movement)
Action - clockwise torque
Goal - balance the movement
RL doesn't need a good initial policy
RL doesn't need labelled data
RL adapts to environment changes (the sample distribution changes during learning)
Deep RL -
> a variation of Q-Learning that uses a deep neural network and random draws of data (experience replay)
> uses two networks - the regular Q and a target Q' - to mitigate non-stationary updates
> deep RL is applied directly to the belief-state space, thanks to its strong generalization properties, to find an effective policy. A minimal sketch of the two-network update follows.
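Below is a PyTorch sketch of that two-network update (the framework, dimensions, and replay details are my assumptions, not from the talk): the regular Q network is trained against a slowly updated target network Q', with transitions drawn at random from a replay buffer.

```python
# Two-network Q-learning: Q trains against a frozen target Q' on random
# replay samples, mitigating non-stationary targets.
import random
import torch
import torch.nn as nn

def make_q_net(n_states=4, n_actions=2):
    return nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, n_actions))

q_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(q_net.state_dict())     # Q' starts as a copy of Q
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma = [], 0.99

def train_step(batch_size=32):
    batch = random.sample(replay, batch_size)      # random draw breaks correlation
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():                          # targets come from frozen Q'
        target = r + gamma * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# toy transitions (state, action, reward, next_state, done) to exercise the step
for _ in range(64):
    replay.append((torch.randn(4), torch.tensor(random.randrange(2)),
                   torch.tensor(1.0), torch.randn(4), torch.tensor(0.0)))
train_step()
target_net.load_state_dict(q_net.state_dict())     # periodically sync Q' <- Q
```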
(H) Natural Language Understanding
Enrich the UMLS ontology with annotations derived through word2vec (from MIMIC streams). A sketch of the idea follows.
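A hedged sketch with gensim (my tooling choice; the note names only word2vec and MIMIC): train embeddings on tokenized clinical text, then look up nearest neighbors of a term as candidate annotations for enriching an ontology such as UMLS.

```python
# Train word2vec on clinical-style text and query neighbors of a term.
from gensim.models import Word2Vec

# toy stand-in for tokenized clinical notes (real input would be MIMIC text)
sentences = [
    ["patient", "developed", "sepsis", "after", "surgery"],
    ["severe", "sepsis", "treated", "with", "antibiotics"],
    ["patient", "received", "antibiotics", "for", "infection"],
] * 50

model = Word2Vec(sentences, vector_size=32, window=3, min_count=1,
                 seed=0, workers=1)
print(model.wv.most_similar("sepsis", topn=3))   # candidate related terms
```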
(I) Applying Neural Turing Machine
These are differentiable Turing machines (sharp functions made smooth, trained with backpropagation)
- learn simple algorithms (copy, repeat, recognize simple formal languages)
- generalize quickly (especially for Language Modeling)
- Note : LSTM
Special techniques like gradient clipping, loss clipping, and the Adam optimizer are used.
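A tiny PyTorch illustration of the gradient-clipping part (my example; the model and data are toys): cap the global gradient norm before each optimizer step to keep training stable.

```python
# Clip the global gradient norm, then take an Adam step.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # the Adam optimizer
x, y = torch.randn(16, 8), torch.randn(16, 1)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
opt.step()
```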
Grand Unified Theory of Machine Learning
— Representation : probabilistic logic >> Markov Logic Network
Formula : present the model as a combination of first-order logic and a Bayesian network
Weights : —
— Evaluation :
> find the hypothesis with the highest posterior probability
> what's the objective function — say the business wants to optimize ROI (the objective function)
— Optimization :
> discover the best formula (Genetic Programming)
> learn the weights of the formula (backprop)