What is NDR?

NDR is an artificial intelligence conference. Well, technically, it’s an artificial intelligence, and machine learning, and deep learning, and data science conference. We don’t discriminate :).

NDR-113 is also the main character in Isaac Asimov’s beautiful sci-fi novel, The Positronic Man. The book tells of a robot that begins to display human characteristics, such as creativity, emotion, and self-awareness. We felt that naming our conference after him was an appropriate homage to the story.

NDR is something we think you’re going to love.

WHERE

Magnum Hall, Hotel International
Iasi, Romania

WHEN
Tuesday
June 4th, 2019

WHERE

Grand Ballroom, Hotel InterContinental
Bucharest, Romania

WHEN
Thursday
June 6th, 2019

WHO’S BEHIND THIS

The good people at Strongbytes and Codecamp, naturally.

Shoot an email to [email protected] for any questions, remarks, or praise you may have, or like our Facebook page to get the latest updates.

You should also subscribe to our newsletter.

Bringing Great Minds Together

Your chance to meet international experts and learn from their experience

We aim to bring together practitioners of data science, machine learning, and deep learning. Filled with carefully selected technical talks, the conference is designed to educate and inspire, with experienced speakers sharing lessons learned, best practices, and real-world business use cases.


An Energising Experience

1 day, 1 track, 10 sessions

At the end of the day you’ll come out excited and exhausted, wanting more. You will gain a better understanding of how to build intelligent applications and see how companies use intelligent techniques in production. You will discover new tools and techniques, learn how to improve your workflow, and find out how to start your data science career.


See you there!

NDR 2019 is coming up soon.

  • Early-Bird
    Discounted pricing, available only until April 15
    Access to the full conference, coffee and snacks
    Price: 69
    BUY TICKETS

  • Regular
    Full-price tickets, available starting April 16
    Access to the full conference, coffee and snacks
    Price: 99
    UNAVAILABLE

First confirmed speakers in 2019

Boris Gorelik
Data Scientist, Automattic | Speaker @ NDR Iasi

Henk Boelman
Cloud AI Architect, Heroes | Speaker @ NDR Iasi

Leonid Kuligin
Strategic Cloud Engineer at Google | Speaker @ NDR Bucharest

Katarina Milosevic
Data Scientist at Generali | Speaker @ NDR Bucharest

Ioana Gherman
Data Scientist at Generali | Speaker @ NDR Bucharest

John D. Kelleher
Academic Leader at the Dublin Institute of Technology | Speaker @ NDR Bucharest

Ciprian Jichici
General Manager, Genisoft | Speaker @ NDR Bucharest

Vlad Iliescu
Head of AI, Strongbytes, Microsoft MVP | Speaker @ NDR Iasi and NDR Bucharest

Gianluca Campanella
Data Scientist, Microsoft | Speaker @ NDR Bucharest

Sorin Peste
Technology Solutions Professional, Data & AI, Microsoft | Speaker @ NDR Bucharest

Schedule

  • June 04 - NDR Iasi


  • June 06 - NDR Bucharest


  • 09:45 - 10:15
    Communication is a crucial part of our jobs. Data visualization plays an important role in such communication. Despite much scientific research, data visualization is perceived as a combination of technical and artistic skills. In this lecture, you will learn why this is the wrong way to think of data visualization. You will also learn about the biggest visualization anti-patterns that I have been able to identify during more than 15 years of my professional career. Finally, I will present a methodological approach that fixes most of the problems and guides the practitioner towards an effective visual representation of a data-intensive idea.

  • 09:45 - 10:15
    A neural network model, no matter how deep or complex, implements a function: a mapping from inputs to outputs. The function a network implements is determined by its weights, so training a network on data (learning the function it should implement) means searching for the set of weights that best enable it to model the patterns in the data. The most commonly used algorithm for this search is gradient descent. On its own, gradient descent can train a single neuron, but it cannot train a deep network with multiple hidden layers; training a deep neural network requires using gradient descent and the backpropagation algorithm in tandem. These algorithms are at the core of deep learning, and understanding how they work is possibly the most direct way of understanding the potential and limitations of deep learning. This talk provides a gentle but comprehensive introduction to both algorithms. I will also explain how the problem of vanishing gradients arises, and how it has been addressed in deep learning. Talk type: Deep Learning, Beginner-Intermediate.
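
    To make the ideas above concrete before the talk, here is a minimal, illustrative NumPy sketch (not part of the talk material; the toy dataset and variable names are invented) of training a single sigmoid neuron with plain gradient descent:

        import numpy as np

        # Toy data: 4 samples, 2 features; the target is the logical OR of the inputs.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0.0, 1.0, 1.0, 1.0])

        rng = np.random.default_rng(0)
        w = rng.normal(size=2)   # weights of the single neuron
        b = 0.0                  # bias
        lr = 1.0                 # learning rate

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for step in range(5000):
            z = X @ w + b                      # weighted sum of inputs
            p = sigmoid(z)                     # neuron output
            # Chain rule: derivative of the squared error w.r.t. z, then w and b.
            grad_z = (p - y) * p * (1 - p)
            grad_w = X.T @ grad_z / len(y)
            grad_b = grad_z.mean()
            w -= lr * grad_w                   # gradient descent update
            b -= lr * grad_b

        print(np.round(sigmoid(X @ w + b), 2))  # predictions move toward [0, 1, 1, 1]

    Backpropagation extends exactly this chain-rule computation backwards through multiple hidden layers; the repeated multiplication of derivatives along the way is also where the vanishing-gradient problem mentioned above comes from.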

  • 10:15 - 10:45
    We are going to discuss the most common scenarios for moving your ML workloads to the cloud (and why you might want to do this). We’ll talk about organizing training in the most effective way, and we’ll briefly discuss the differences between the various types of accelerators. Afterwards we’ll look at how TensorFlow deals with very large datasets and with complicated models that have long training times. Topics: why you would move your training workloads to the cloud (common scenarios); renting a single VM; using the TensorFlow Estimators API; managing costs by training with preemptible VMs; accelerators (CPUs vs. GPUs vs. TPUs); an intro to distributed training (data parallelism vs. model parallelism); how TensorFlow handles large datasets; running TensorFlow on GCP (Google Cloud ML Engine vs. Kubeflow).
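
    As a rough, hedged illustration of the Estimator workflow mentioned above (my sketch, not the speaker's code; it assumes TensorFlow 1.x, synthetic data, and a hypothetical GCS bucket path):

        import numpy as np
        import tensorflow as tf  # assumes TensorFlow 1.x

        # Synthetic data standing in for a real training set.
        X = np.random.rand(1000, 4).astype(np.float32)
        y = (X.sum(axis=1) > 2.0).astype(np.int32)

        feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

        # Writing checkpoints to a GCS path is what lets the same code run on a
        # single VM or on a managed service; the bucket name here is made up.
        estimator = tf.estimator.DNNClassifier(
            hidden_units=[32, 16],
            feature_columns=feature_columns,
            n_classes=2,
            model_dir="gs://your-bucket/ndr-demo-model",
        )

        train_input_fn = tf.estimator.inputs.numpy_input_fn(
            x={"x": X}, y=y, batch_size=64, num_epochs=None, shuffle=True
        )
        estimator.train(input_fn=train_input_fn, steps=1000)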

  • 10:45 - 11:15
    For decades, the quest to build the ultimate model that helps understand the hearts and minds of customers has been at the top of the agenda for both business decision makers and data scientists. The exponential growth in the quantity of available data, as well as the ever-increasing sophistication of customers, made this model a moving target and a rather difficult one to achieve. Finally, technology has caught up and we’re ready to move into the era of the Customer Model – a complex, powerful, and comprehensive approach that promises to open a new chapter in the way we model customer behavior. Starting from hundreds or thousands of features capturing a wide range of aspects of this behavior, advanced deep learning models can be trained to encode and measure it in a highly efficient way. Easy and cost-effective access to thousands or tens of thousands of GPU cores through services like Azure Machine Learning or Azure Databricks makes such complex deep learning models a viable option. The journey to the ultimate customer model is not without difficulties, though. From “simple” problems like encoding categorical features with thousands of distinct values, up to the difficult task of designing efficient deep learning encoders, there are many challenges out there. The session will help you better understand them and implement efficient solutions. Packed with lots of deep learning demos, it builds on the real-world expertise Ciprian has gained building advanced customer models for a wide range of customers and verticals.
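
    One of the “simple” problems mentioned above, encoding a categorical feature with thousands of distinct values, can be sketched with the hashing trick; an illustrative scikit-learn example (the feature names and values are invented):

        from sklearn.feature_extraction import FeatureHasher

        # Each customer is described by a few high-cardinality categorical features.
        customers = [
            {"city": "Iasi", "device": "android", "last_product": "sku_48213"},
            {"city": "Bucharest", "device": "ios", "last_product": "sku_90177"},
        ]

        # Hash the string features into a fixed-size vector instead of
        # one-hot encoding thousands of distinct values.
        hasher = FeatureHasher(n_features=2**12, input_type="dict")
        X = hasher.transform(customers)   # sparse matrix of shape (2, 4096)
        print(X.shape)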

  • 09:15 - 09:45
    How can organizations optimize their sales channels and product targeting? Can you automate the first line of support and improve customer satisfaction? How do you protect your online payment channel from fraud? These and more questions are addressed in this session about building smarter business applications that leverage the capabilities of Artificial Intelligence technologies. Come and see Azure Machine Learning, Microsoft Cognitive Services, and the Bot Framework in practice, used to build intelligent applications that analyze data and predict better outcomes for businesses.

  • 09:45 - 10:15
    Online advertising is an essential component of any business strategy. Every year, investment in online advertising grows for both mobile and web. To meet this growing demand, many online ad publishers build their own ad serving platforms to manage and deliver ad inventory. As a consequence, click prediction systems are crucial to the success of such platforms. In this talk, I will introduce the importance of click prediction in ad servers and some of the challenges found when building click prediction models. I will then explore some of the simplest algorithms used to tackle click prediction, as well as some of the parameters that most impact performance.
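
    To make the “simplest algorithms” concrete, here is an illustrative logistic-regression click model on synthetic data (my sketch, not the talk's code; the features are invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 5000
        # Invented features: ad position on the page, hour of day, user-interest score.
        X = np.column_stack([
            rng.integers(1, 6, n),
            rng.integers(0, 24, n),
            rng.random(n),
        ])
        # Clicks are rare and depend mostly on position and interest.
        p_click = 1 / (1 + np.exp(2.5 + 0.6 * X[:, 0] - 3.0 * X[:, 2]))
        y = rng.random(n) < p_click

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))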

  • 10:30 - 11:00
    An insight into the creation of a graph-based, quantum-inspired neural network that outperforms the Big Players (Google, IBM, Microsoft and Alexa) in Natural Language Processing.

  • 11:00 - 11:30
    Common approaches to measuring how well a new model performs can be highly misleading, and simply picking the one with the highest precision/recall can ruin your product. I'll explain how, and look at some simple approaches we use in Dimensions that you can adopt in your own workflow to combat this, as well as some larger organisational changes that may be required.
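
    As a small illustration of why a single precision/recall number can mislead (my sketch, not the speaker's material): the same model yields very different trade-offs depending on the decision threshold, so “best” depends on the cost of each error type.

        import numpy as np
        from sklearn.metrics import precision_recall_curve

        # Pretend these are held-out labels and predicted probabilities.
        rng = np.random.default_rng(1)
        y_true = rng.random(2000) < 0.1                      # ~10% positive class
        scores = np.clip(y_true * 0.4 + rng.random(2000), 0, 1)

        precision, recall, thresholds = precision_recall_curve(y_true, scores)
        for t in (0.5, 0.7, 0.9):
            i = np.searchsorted(thresholds, t)
            print(f"threshold={t:.1f}  precision={precision[i]:.2f}  recall={recall[i]:.2f}")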

  • 11:45 - 12:15
    Recommender systems increasingly shape your behavior online, recommending everything from the clothes you wear to the music you listen to and the people you become friends with. In this talk we will take a look at the major types of recommender systems, how they work (including their advantages and disadvantages), and how they can be used effectively.
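
    A minimal item-based collaborative-filtering sketch (illustrative only; the ratings matrix is invented) showing the core idea behind one major type of recommender:

        import numpy as np

        # Rows are users, columns are items; 0 means "not rated yet".
        ratings = np.array([
            [5, 4, 0, 1],
            [4, 5, 1, 0],
            [1, 0, 5, 4],
            [0, 1, 4, 5],
        ], dtype=float)

        # Cosine similarity between item columns.
        norms = np.linalg.norm(ratings, axis=0)
        item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

        # Score unrated items for user 0 as a similarity-weighted average
        # of the ratings that user has already given.
        user = ratings[0]
        rated = user > 0
        scores = item_sim[:, rated] @ user[rated] / item_sim[:, rated].sum(axis=1)
        scores[rated] = -np.inf              # don't recommend items already rated
        print("recommend item:", int(np.argmax(scores)))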

  • 12:15 - 12:45
    This talk will give an introduction to neural networks and how they are used for machine translation. The primary goal is to provide a deep enough understanding of NMT that the audience can appreciate the strengths and weaknesses of the technology. The talk starts with a brief introduction to standard feed-forward neural networks (what they are, how they work, and how they are trained). This is followed by an introduction to word embeddings (vector representations of words), and then we introduce recurrent neural networks. Once these fundamentals have been covered, we focus on the components of a standard neural machine translation architecture, namely: encoder networks, decoder language models, and the encoder-decoder architecture.
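
    To ground the terms above, here is a toy NumPy sketch (mine, not the speaker's) of a word-embedding lookup followed by a simple recurrent encoder, the two building blocks the talk combines into an encoder-decoder:

        import numpy as np

        rng = np.random.default_rng(0)
        vocab = {"<pad>": 0, "ndr": 1, "is": 2, "fun": 3}
        emb_dim, hidden_dim = 8, 16

        E = rng.normal(size=(len(vocab), emb_dim))     # embedding matrix
        W = rng.normal(size=(hidden_dim, emb_dim))     # input-to-hidden weights
        U = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
        b = np.zeros(hidden_dim)

        def encode(token_ids):
            """Run a plain RNN over the sentence and return the final hidden state."""
            h = np.zeros(hidden_dim)
            for t in token_ids:
                x = E[t]                               # embedding lookup
                h = np.tanh(W @ x + U @ h + b)         # recurrent update
            return h

        sentence = [vocab[w] for w in ["ndr", "is", "fun"]]
        print(encode(sentence).shape)  # (16,): a summary vector a decoder could consume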

  • 13:45 - 14:15
    At first sight, forecasting looks like just another regression problem; however, time series pose unique statistical challenges that require specialised models. Starting with some common mistakes (and fixes!) in time series analysis, we will then introduce an array of techniques from classical ARIMA to neural networks, with a short Bayesian detour. The different methods will be illustrated and compared using a large spatio-temporal dataset as a motivating example. We conclude with some modelling recommendations and strategies for tackling general forecasting problems.
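
    As a tiny, self-contained illustration of the classical end of that spectrum (my sketch, not the talk's code), an AR(1) model fit by ordinary least squares on a synthetic series:

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic AR(1) series: y_t = 0.8 * y_{t-1} + noise.
        n = 500
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

        # Fit y_t = c + phi * y_{t-1} by least squares.
        X = np.column_stack([np.ones(n - 1), y[:-1]])
        c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
        print(f"estimated phi = {phi:.2f} (true value 0.8)")

        # One-step-ahead forecast.
        print("next-step forecast:", c + phi * y[-1])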

  • 14:15 - 14:45
    Reinforcement Learning is learning what to do – what action to take in a specific situation – in order to maximize some type of reward. It’s one of the most promising areas of Machine Learning today. It plays an important part in some very high-profile success stories of AI, such as mastering Go, learning to play computer games, autonomous driving, autonomous stock trading, and more. In this talk we’ll introduce the main theoretical and practical aspects of Reinforcement Learning, discuss its very distinctive set of challenges, and explore what the future looks like for self-training machines.
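
    A compact tabular Q-learning sketch on an invented 5-state corridor (illustrative only, not from the talk) showing the "learn which action maximizes reward" loop:

        import numpy as np

        rng = np.random.default_rng(3)
        n_states, n_actions = 5, 2          # corridor of 5 cells; actions: 0 = left, 1 = right
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

        for episode in range(500):
            s = 0                            # start at the left end
            while s != n_states - 1:         # a reward of +1 waits at the right end
                a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
                s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                r = 1.0 if s_next == n_states - 1 else 0.0
                # Q-learning update: move Q(s, a) toward the bootstrapped target.
                Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
                s = s_next

        print(Q.argmax(axis=1))              # learned policy: "go right" in every non-terminal state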

  • 15:00 - 15:30
    User satisfaction surveys are a common and powerful tool for helping customer experience teams improve their product, by helping them understand which parts of the user experience contribute most to a given outcome. However, they suffer from two disadvantages: first, it is difficult and time-consuming to design good survey questions and to analyze the results; second, convincing many users to complete multi-page, monotonous surveys is hard and makes for a bad user experience. In this talk, we explore techniques such as clustering, natural language understanding, and summarization that enable customer experience teams to easily derive insight from a single open-ended question rather than a long sequence of very specific multiple-choice questions.
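
    An illustrative sketch (mine, with made-up answers) of the clustering step described above: group free-text answers to a single open-ended question so the team can read one theme at a time instead of every response.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        answers = [
            "The app is great but it crashes when I upload photos",
            "Crashes every time I try to add a photo",
            "Love the design, very clean and easy to use",
            "The interface looks beautiful and is simple to navigate",
            "Support took three days to answer my ticket",
            "Customer support was slow to respond",
        ]

        X = TfidfVectorizer(stop_words="english").fit_transform(answers)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        for cluster in range(3):
            print(f"theme {cluster}:", [a for a, l in zip(answers, labels) if l == cluster])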

  • 15:30 - 16:00
    Deep Learning is the buzzword of the day in IT. Fueled by the significant advancements generated by GPUs and, lately, FPGAs, deep learning is on the path to becoming ubiquitous. Yet most people are unaware that the first incarnation of a neural net, the perceptron, has its 60th birthday this year. Once almost deemed a “dead end”, neural nets, represented by their most preeminent incarnation – deep learning nets – are coming back into the public spotlight with a vengeance. Join me in this session to discover the inner workings of deep learning networks, their advantages and pitfalls, as well as their areas of applicability. I’ll cover the history and evolution of the field as well as its present state of the art. We’ll talk about the most popular deep learning platforms, and about how the cloud and the intelligent edge together enable a broad range of scenarios to be addressed.

  • 16:15 - 16:45
    Azure is huge – there are so many choices to make, and new options seem to arrive every day. But which to choose? And why? In this session we will explore the various options for doing Artificial Intelligence on Azure and also demo the latest and greatest technology available for you to use today. Even a sneak peek into the future will be provided.

Partners NDR Iasi

Location