Most AI projects never make it past the pilot phase. One reason is that data-driven decision making demands clarity of process, which deep learning and other complex models rarely provide. It is therefore high time for a focused workflow for explaining AI and black-box models. I will talk about the explainability and interpretability of deep learning models using DeepLift and the latest algorithms for XAI, which have come a long way from variable importance and Shapley values. This will help deep learning practitioners understand their models and communicate them to their business partners.
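
To make this concrete, here is a minimal sketch of what a DeepLift attribution looks like in practice, using PyTorch with the Captum library (not prescribed by the abstract; the toy model, input shapes, and zero baseline are illustrative assumptions):

```python
# Illustrative sketch: attributing a prediction with DeepLift via Captum.
# The model, input sizes, and baseline below are placeholders, not the
# talk's actual example.
import torch
import torch.nn as nn
from captum.attr import DeepLift

# A small feed-forward network standing in for any trained deep model.
model = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()

inputs = torch.randn(4, 10)            # batch of 4 examples, 10 features each
baselines = torch.zeros_like(inputs)   # reference input DeepLift compares against

dl = DeepLift(model)
# Attributions have the same shape as the inputs: one score per feature,
# measuring how much that feature moved the class-1 output away from
# the output at the baseline.
attributions = dl.attribute(inputs, baselines=baselines, target=1)
print(attributions.shape)  # torch.Size([4, 10])
```

Such per-feature attributions are what practitioners can translate into plain-language explanations for business partners.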