Deep Learning DevCon 2023


May 27

Virtual

3rd Edition

Deep Learning DevCon (DLDC) is an influential conference on deep learning. Join us to hear from some of the leading professionals & researchers who are pushing the boundaries of this exciting field.
Attendees · Speakers · Organizations · Hours of Workshops · Sessions · Papers Submitted

Bringing out the latest research and advancements in Deep Learning

Since deep learning is such a broad subarea of artificial intelligence & machine learning, the range of problems it can address is very extensive. DLDC plans to uncover those approaches and bring out the latest research and advancements in this field.

The summit will feature talks, paper presentations, exhibitions & hackathons, along with a dedicated track of workshops on deep learning.

Visionary Speakers

Worldwide Event

Level Up Your Skills in Deep Learning

Find Your Tribe

Call for Papers

DLDC welcomes submissions reporting research that advances deep learning, broadly conceived. The conference scope includes all subareas of deep learning. We expressly encourage work that cuts across technical areas or develops DL techniques in the context of important application domains, such as healthcare, sustainability, transportation, and commerce.

Submissions closing
World Class Past Speakers

Here are our speakers from the last two years. We are in the process of finalizing this year's speakers; please check back on this page again.
Free

May 27

Virtual

3rd Edition


Past Sponsors

Last Year's Schedule

  • Day 1

    September 23, 2021

  • Day 2

    September 24, 2021

  • Artificial Intelligence and Deep Learning are evolving rapidly, with new and exciting advances every day. There is also a race among countries to establish a dominant position in AI. This keynote covers the latest advances in Deep Learning and the latest applications of AI worldwide.
    Tech Talks

  • With privacy being the buzzword in data collection and analysis, how should the tech world prepare for a differentially private world?
    Tech Talks

  • Formula E has gained popularity as a sustainability-conscious sport that drives innovations to improve electric vehicles. The premise behind Formula E is not only that the cars are fully electric, but that the 11 teams, each with two drivers, compete in identically set-up, electric battery-powered race cars. Performance data on racing dynamics, the drivers, and the cars across the past seven seasons is a rich and reliable basis for forecasting and simulation using state-of-the-art optimization and data science algorithms. With available data ranging from past driver performances, lap times, and standings in past races to weather and car technical information such as battery, tires, and engine, data scientists can predict the number of laps a car can complete by quantifying behavioural attributes such as a driver's risk-taking appetite, alongside factors such as track information and weather that can affect the races. The purpose of this exercise is to define an approach that uses historical data to predict the number of laps a car will finish in 45 minutes in an upcoming race. We built an ensemble that combines an intuitive mathematical model with a deep learning model to predict the number of laps at the end of every race.
    Tech Talks

  • Data scientists need current, clean, consolidated data sets to unlock the potential of machine learning. However, they often face challenges due to infrastructure constraints and data that is siloed and outdated. A data strategy with a scalable, real-time cloud data platform as a central pillar is key to addressing these challenges. In this session, experts Shaji Thomas and Swagata Maiti from Ugam, a Merkle company, will deep-dive into seven techniques that will help you build a scalable data platform. These techniques include automated data validation, reusable feature stores, streaming ingestion, transformation of IoT sensor data, and more.
    Tech Talks

  • In machine learning we are accustomed to traditional ways of representing data: we assume a set of i.i.d. data points represented by tabular features and try to learn a statistical model on this dataset. In real life, information flows through networks of interacting entities, and individual data points are interconnected through rich, informative relationships. Imagine data representing social networks, search engines, geospatial systems, question-answering systems, protein interaction networks, or even molecular structures. Graphs are a natural choice to represent these use cases: we connect data points (entities/nodes) with relationships (edges) and optionally embed each node and edge with features. Where traditional machine learning algorithms fail to exploit these inductive biases, graph neural networks (deep neural networks that operate on graphs) have been successfully applied and are seeing increasing adoption by industry. This talk will cover how to model datasets as graphs and how to use GNNs to solve use cases such as node classification, link prediction, clustering, and learning high-quality embeddings (a graph-convolution sketch appears after the schedule).
    Tech Talks

  • A huge volume of the information in an enterprise flows through documents, and understanding the structure of documents allows us to extract relevant and meaningful information. Documents come in different types and formats, including native PDFs, scanned images, web pages, etc. They also use different templates, which makes document processing and understanding a tedious task. More specifically, financial documents such as audited reports, bank statements, financial reports, and exchange filings are important documents that need to be analyzed for multiple compliance- and risk-related applications such as underwriting and risk rating. This talk focuses on Document AI, i.e., AI-powered automated analysis of documents. We share our R&D efforts at American Express and demonstrate how Document AI-enabled products can drive innovation and efficiency at scale.
    Tech Talks

  • Computer Vision is the field of AI that helps us understand visual data such as pictures and videos using advanced algorithms. Until the late 1990s, people mainly used hand-crafted features for Computer Vision tasks, and such methods were complex and inefficient. Neural networks were used in the late 1990s but were not very successful due to a lack of good training data and limited computing power. In 2009, ImageNet provided large labeled training data and started a revolution in the Computer Vision field. The application of Convolutional Neural Networks (CNNs) gave impressive results in this area. AlexNet, ResNet, and YOLO are some breakthrough models that led to massive progress in the field. Deep Learning has been widely used in image classification, object detection, and image segmentation tasks. Computer Vision plays a critical role in self-driving cars. The Generative Adversarial Network (GAN) is an exciting development in Deep Learning that can be used to create realistic artificial images and videos.
    Tech Talks

  • This talk is an insightful discussion of my Deep Learning AutoML library, Deep AutoViML. deep_autoviml is a powerful new deep learning library with a very simple design goal: make it as easy as possible for novices and experts alike to experiment with and build tensorflow.keras preprocessing pipelines and models in as few lines of code as possible. The agenda of the talk includes: 1. Top features of the library. 2. How the library can be used in the context of MLOps.
    Tech Talks

  • This talk will be about our recent work on using deep learning techniques to predict credit ratings for global corporate entities and their obligations; the approach is compared with the most popular traditional machine-learning approaches, such as linear models and tree-based classifiers. The results of this work also demonstrated adequate accuracy over different rating classes when applying categorical embeddings to artificial neural network (ANN) architectures (a categorical-embedding sketch appears after the schedule).
    Tech Talks

  • Text data inherently has a lot of noise, such as punctuation and stop words, which needs to be removed before building a machine learning model. To build an ML model from text, the data needs to be vectorized. Text vectorization can be done using frequency-based techniques or neural-net-based pre-trained models. In this session you will learn how to: 1. Clean the text data. 2. Draw inferences from text data through word clouds and bar graphs. 3. Vectorize text. 4. Build ML models for text classification. 5. Apply pre-trained neural-net models for text classification. (A text-classification sketch appears after the schedule.)
    Workshop

  • We present TEEN, an industry-grade solution to the problem of time expression extraction and normalization (Timex). Extraction and normalization of temporal units is a challenging problem due to several factors, e.g., (i) the same time units may be expressed in different ways, (ii) the inherent ambiguity of natural languages leads to multiple interpretations, and (iii) natural languages are context sensitive. While various academic and industrial approaches have presented solutions for Timex, building an industry-strength solution involves additional challenges in the form of user expectations, the need for high precision, and the lack of training corpora. We elaborate on how TEEN carefully mitigates these challenges. We demonstrate how the proposed approach compares with various state-of-the-art baselines on textual data from the finance industry. We further categorize the inadequacies of these baselines in an industrial setting. Finally, we provide insights gathered through the observations we made and the lessons we learned while designing TEEN to work in an industrial setting.
    Paper Presentation

  • The universe is the vast expanse of cosmic space consisting of billions of galaxies. Galaxies are made of billions of stars that revolve around a gravitational center, a black hole. Quasars are quasi-stellar objects that emit electromagnetic radiation more potent than the luminosities of entire galaxies combined. In this paper, the fourth version of the Sloan Digital Sky Survey (SDSS-4), Data Release 16 dataset was used to classify SDSS objects into galaxies, stars, and quasars using machine learning and deep learning architectures. We efficiently utilize both images and metadata in tabular format to build a novel multi-modal architecture and achieve state-of-the-art results. For the tabular data, we compared classical machine learning algorithms (Logistic Regression, Random Forest, Decision Trees, AdaBoost, LightGBM, etc.) with artificial neural networks. Deep learning architectures such as ResNet50, VGG16, EfficientNetB2, Xception, and DenseNet121 have been used for the images. Our work sheds new light on multi-modal deep learning and its ability to handle imbalanced class datasets. The multi-modal architecture further resulted in higher metrics (accuracy, precision, recall, F1 score) than models using only images or only tabular data (a multi-modal model sketch appears after the schedule).
    Paper Presentation

  • Most AI projects do not reach beyond the pilot phase, because data-driven decision making needs a clarity of process that deep learning and other complex models do not allow for. Thus, it's high time for a focused workflow for explaining AI and black-box models. I will talk about the explainability and interpretability of deep learning models using DeepLIFT and the latest algorithms for XAI, which have come a long way from variable importance and Shapley values. This will help deep learning practitioners understand and communicate models with their business partners.
    Tech Talks

  • Prototyping a high-confidence AI/ML model for a target business problem statement is the first step in operationalizing ML. Making the ML model available to the business to aid effective decision making is the end goal. This session will explore the landscape and look at different approaches to deploying and scaling ML models to production. Additionally, the session will introduce a few approaches to ML lifecycle management. Attendees will get hands-on experience with model serving using a pretrained model, along with an overview of the training pipeline. This working session will cover: a hands-on approach to pipelines and their orchestration using TFX/Airflow, and an API/SDK approach to model deployment as a web service with Flask (a model-serving sketch appears after the schedule). Prerequisites: a laptop with a minimum of 8 GB RAM running Windows/Linux/macOS; Anaconda (Individual Edition) installed; a good internet connection to download coding stubs and the pretrained model; Docker Desktop installed; basic familiarity with Google Colab and a Google account.
    Workshop

  • Leaks have undoubtedly been one of the biggest problems plaguing piping and cabling systems across industries such as electricity and power, buildings and smart cities, oil and gas, etc. Addressing these leaks in time is paramount, as failure leads to a complete standstill of the transportation chain. Most AI-based leak detection systems have failed to reach the deployment stage because they are prone to outputting false positives. It is imperative to observe that these leaks don't occur every day; in other words, they are rare events. But when they do occur, these leaks more often than not go unnoticed. Due to the insufficient number of identified leak points, it becomes difficult to build an AI-based model for the task. In an attempt to aid or replace rule-based and physics-based leak detection systems, this paper proposes a novel AI-based leak detection solution using reinforcement learning which not only reduces false positives but also extends to multi-armed-bandit-based leak localization. Using this methodology, we model the latent behavior of any piping or cabling system and provide a Q-learning-based shortest-path recommendation to help the maintenance team reach the leak node in a short amount of time (a tabular Q-learning sketch appears after the schedule).
    Paper Presentation

  • With the ever-increasing use of complex machine learning models in critical applications, explaining the models' decisions has become a necessity. With applications spanning from credit scoring to healthcare, the impact of these models is undeniable. Among the multiple ways in which one can explain the decisions of these complicated models, local post-hoc model-agnostic explanations have gained massive adoption. These methods allow one to explain each prediction independently of the modelling technique that was used during training. As explanations, they either give individual feature attributions or provide sufficient rules that represent the conditions for a prediction to be made. Current state-of-the-art methods use rudimentary techniques to generate explanations with a fairly simple surrogate model, and the data structures they employ restrict them to local explanations which fail to scale up to big data. In this paper, we come up with a novel pipeline for building explanations. A Generative Adversarial Network is employed for generating synthetic data, while a piecewise linear model in the form of Linear Model Trees is used as the surrogate model. The combination of these two techniques provides a powerful yet intuitive data structure to explain complex machine learning models. The novelty of this data structure is that it provides explanations in the form of both decision rules and feature attributions. The data structure also enables us to build a global explanation model in a computationally efficient manner, such that it scales to large amounts of data.
    Paper Presentation

  • Digital advertising enables advertisers to promote their products on various online and digital channels. Real-Time Bidding is an advanced advertising method which allows advertisers to target potential buyers and acquire ad space on websites in the form of programmatic auctions. Hundreds of billions of such auctions take place every day. Predicting future performance and developing customized targeting strategies enables advertisers to make better use of their budgets and ultimately improve their KPI measures. With Google's recent policy of phasing out third-party cookies, organizations around the world are shifting towards data-protected contextual targeting strategies. This paper proposes a machine learning-based approach to predicting future ad-campaign performance by focusing on contextual features such as browser, operating system, device type, and so on. First, a custom metric encompassing cost, performance, and campaign delivery is developed. This metric's predicted value is used to score and recommend targeting strategies. To generate new features, feature engineering techniques such as lag feature generation, statistical encodings, graph-based embeddings for websites, and cyclical feature encodings (The CMO's Guide to Programmatic Buying, n.d.) are prepared for downstream tasks such as CTR prediction (a cyclical-encoding sketch appears after the schedule). This paper then compares and chooses the best-performing linear, tree-based, and deep learning models for our proposed hypotheses. Finally, we develop heuristic criteria for offline testing of the recommended strategies and calculate the theoretical performance uplift for comparison with older ranker-score methodologies, which serve as our baseline results. Our proposed solution outperforms the baseline and proposes a novel way of recommending strategies.
    Paper Presentation

  • Predictive model design for accurately predicting future stock prices has always been considered an interesting and challenging research problem. The task becomes complex due to the volatile and stochastic nature of real-world stock prices, which are affected by numerous controllable and uncontrollable variables. This paper presents an optimized predictive model built on a long short-term memory (LSTM) architecture that automatically extracts past stock prices from the web over a specified time interval and predicts their future prices for a specified forecast horizon (an LSTM forecasting sketch appears after the schedule). The model is deployed for making buy and sell transactions based on its predicted results for 70 important stocks from seven different sectors listed on the National Stock Exchange (NSE) of India. The profitability of each sector is derived from the total profit yielded by the stocks in that sector over the period from Jan 1, 2010 to Aug 26, 2021. The sectors are compared based on their profitability values. The prediction accuracy of the model is also evaluated for each sector. The results indicate that the model is highly accurate in predicting future stock prices.
    Paper Presentation
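
Code Sketches from Selected Sessions

The graph neural network talk describes connecting data points as nodes and edges and learning on that structure. Below is a minimal sketch of one graph-convolution layer in plain PyTorch; it is not the speaker's code, and the toy ring graph, feature sizes, and class count are invented for illustration.

import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    # One graph-convolution step: average each node's neighbours (plus itself), then apply a linear map.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees, including self-loops
        return self.linear(adj @ x / deg)                # mean-aggregate neighbours, then project

# Toy graph: 4 nodes in a ring, 3-dimensional node features, 2 node classes (all invented).
x = torch.randn(4, 3)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
adj = torch.eye(4)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj[edges[:, 1], edges[:, 0]] = 1.0

layer1, layer2 = SimpleGCNLayer(3, 8), SimpleGCNLayer(8, 2)
logits = layer2(torch.relu(layer1(x, adj)), adj)  # per-node class scores for node classification
print(logits.shape)                               # torch.Size([4, 2])

Stacking such layers lets information propagate over multi-hop neighbourhoods, which is what enables tasks like node classification and link prediction.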
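For the credit-rating talk, here is a rough sketch of how categorical embeddings can feed an ANN in tensorflow.keras. The feature name ("sector"), vocabulary size, number of numeric ratios, and the seven rating buckets are assumptions made up for the example, not details from the talk.

import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical inputs: one categorical feature ("sector", 20 possible values) and 5 numeric ratios.
sector_in = layers.Input(shape=(1,), dtype="int32", name="sector")
numeric_in = layers.Input(shape=(5,), name="ratios")

# A learned embedding replaces one-hot encoding for the categorical feature.
sector_emb = layers.Flatten()(layers.Embedding(input_dim=20, output_dim=4)(sector_in))

h = layers.Concatenate()([sector_emb, numeric_in])
h = layers.Dense(32, activation="relu")(h)
out = layers.Dense(7, activation="softmax", name="rating_class")(h)  # e.g. 7 rating buckets

model = tf.keras.Model([sector_in, numeric_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()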
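For the text-classification workshop, a minimal scikit-learn sketch of the clean / vectorize / classify steps listed in the session outline. The tiny corpus and labels are invented; a real session would use a proper dataset and may swap TF-IDF for a neural-net-based pre-trained vectorizer.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def clean(text):
    # Very basic cleaning: lowercase and strip punctuation/digits (step 1 of the outline).
    return re.sub(r"[^a-z\s]", " ", text.lower())

# Tiny illustrative corpus with made-up labels (1 = positive, 0 = negative).
texts = ["Great product, works well!", "Terrible support, very slow.", "Loved it", "Awful experience"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=clean, stop_words="english")),  # frequency-based vectorization
    ("model", LogisticRegression()),                                       # simple text classifier
])
clf.fit(texts, labels)
print(clf.predict(["works great", "very slow and awful"]))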
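For the SDSS galaxy/star/quasar paper, a small sketch of a multi-modal tensorflow.keras model that fuses an image branch with a tabular branch. The image size, metadata width, and the lightweight CNN stand-in (instead of the ResNet50/VGG16-class backbones used in the paper) are assumptions chosen for brevity.

import tensorflow as tf
from tensorflow.keras import layers

# Image branch: a tiny CNN stand-in for a ResNet50/VGG16-style backbone.
img_in = layers.Input(shape=(64, 64, 3), name="image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.GlobalAveragePooling2D()(x)

# Tabular branch for the photometric metadata (feature count is illustrative).
tab_in = layers.Input(shape=(10,), name="metadata")
t = layers.Dense(32, activation="relu")(tab_in)

# Late fusion of both modalities, then a 3-way head: galaxy / star / quasar.
h = layers.Concatenate()([x, t])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(3, activation="softmax")(h)

model = tf.keras.Model([img_in, tab_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])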
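For the deployment workshop, a minimal sketch of the "model as a web service" pattern with Flask. The model file name ("model.h5"), input payload format, and port are placeholders, not the workshop's actual assets.

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("model.h5")  # assumed pretrained model file

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                            # expects {"features": [[...], ...]}
    features = np.array(payload["features"], dtype="float32")
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)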
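For the leak-detection paper, a toy sketch of tabular Q-learning that learns a short path to a leak node on a small, made-up pipe graph. The graph, rewards, and hyperparameters are illustrative only and are not taken from the paper.

import numpy as np

# Toy pipe network as an adjacency list; node 4 is the (assumed) leak location.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
leak_node, n_nodes = 4, 5

Q = np.zeros((n_nodes, n_nodes))           # Q[current_node, next_node]
alpha, gamma, episodes = 0.5, 0.9, 500
rng = np.random.default_rng(0)

for _ in range(episodes):
    state = rng.integers(n_nodes)
    while state != leak_node:
        nxt = rng.choice(graph[state])                      # explore a random neighbouring node
        reward = 100 if nxt == leak_node else -1            # encourage reaching the leak quickly
        Q[state, nxt] += alpha * (reward + gamma * Q[nxt].max() - Q[state, nxt])
        state = nxt

# Greedy rollout: follow the learned Q-values from node 0 to the leak.
path, state = [0], 0
while state != leak_node:
    state = max(graph[state], key=lambda a: Q[state, a])
    path.append(state)
print(path)  # e.g. [0, 1, 3, 4]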
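For the ad-campaign paper, a short sketch of the cyclical feature encoding it mentions, applied with pandas and NumPy to an hour-of-day column; the sample data is invented.

import numpy as np
import pandas as pd

# Toy impression log with an hour-of-day column (values are illustrative).
df = pd.DataFrame({"hour": [0, 6, 12, 18, 23]})

# Map the hour onto a circle so that 23:00 and 00:00 end up close together,
# unlike a plain integer encoding.
df["hour_sin"] = np.sin(2 * np.pi * df["hour"] / 24)
df["hour_cos"] = np.cos(2 * np.pi * df["hour"] / 24)
print(df)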
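For the stock-prediction paper, a compact sketch of a windowed LSTM forecaster in tensorflow.keras trained on a synthetic series. The lookback length, layer sizes, and data are illustrative, not the paper's configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(series, lookback=30):
    # Turn a 1-D (scaled) price series into (past window -> next value) training pairs.
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = np.asarray(series[lookback:])
    return X[..., np.newaxis], y

prices = np.sin(np.linspace(0, 20, 400))           # synthetic stand-in for a scaled price series
X, y = make_windows(prices)

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(30, 1)),          # summarize the 30-step lookback window
    layers.Dense(1),                               # next-step price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
next_price = model.predict(X[-1:])                 # one-step-ahead forecast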

