What is NDR?

NDR is an artificial intelligence conference. Well, technically, it’s an artificial intelligence, and machine learning, and deep learning, and data science conference. We don’t discriminate :).

NDR-113 is also the name of the main character in The Positronic Man, the beautiful sci-fi novel by Isaac Asimov and Robert Silverberg. The book tells of a robot that begins to display human characteristics, such as creativity, emotion, and self-awareness. We felt that naming our conference after him was a fitting homage to the story.

NDR is something we think you’re going to love.


Chopin Hall, Palas Ensemble
Iasi, Romania

June 7th, 2018


NDR is organized by the good people at Strongbytes and Codecamp, naturally.

Shoot an email to [email protected] for any questions, remarks, or praise you may have, or like our Facebook page to get the latest updates.

You should also subscribe to our newsletter.


Bringing Great Minds Together

Your chance to meet international experts and learn from their experience

We aim to bring together practitioners of data science, machine learning, and deep learning. Filled with carefully selected technical talks, the conference is designed to educate and inspire its audience, with seasoned speakers sharing their experiences, best practices, and business use cases.

An Energising Experience

1 day, 1 track, 11 sessions

At the end of the day you'll come out excited and exhausted, wanting more. You will get a better understanding of how to build intelligent applications and see how companies are using machine learning in production. You will find out about new tools and techniques, learn how to improve your workflow, and discover how to start your data science career.

See you there!


Our Speakers

Conference Schedule

  • NDR, May 28, 2020

  • NDR, July 28, 2020

  • 09:45 - 10:25

  • 10:25 - 11:00
    We all love linear regression for its interpretability: increase the living area by one square meter, and the predicted rent goes up by 8 euros. A human can easily understand why this model made a certain prediction. Complex machine learning models like tree ensembles or neural networks usually make better predictions, but this comes at a price: it's hard to understand these models. In this talk, we'll look at a few common problems of black-box models, e.g. unwanted discrimination or unexplainable false predictions ("bugs"). Next, we go over three methods to pry open these models and gain some insight into how and why they make their predictions. We'll conclude with a few predictions about the future of (interpretable) machine learning. Specifically, the topics covered are: what makes a model interpretable (linear models, trees); how to understand your model; model-agnostic methods for interpretability (permutation feature importance, partial dependence plots (PDPs), Shapley values / SHAP); and the future of (interpretable) machine learning.
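
    A minimal sketch of one of the model-agnostic methods above, permutation feature importance; the model, data, and metric are placeholders, and the metric is assumed to be higher-is-better (e.g. accuracy or R^2):

        import numpy as np

        def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
            # Importance = drop in score when a feature's values are shuffled.
            # Assumes metric(y_true, y_pred) where higher is better.
            rng = np.random.default_rng(seed)
            baseline = metric(y, model.predict(X))
            importances = np.zeros(X.shape[1])
            for j in range(X.shape[1]):
                drops = []
                for _ in range(n_repeats):
                    X_perm = X.copy()
                    rng.shuffle(X_perm[:, j])  # break feature j's link to the target
                    drops.append(baseline - metric(y, model.predict(X_perm)))
                importances[j] = np.mean(drops)
            return importances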

  • 11:15 - 11:55
    Selling a house in Italy takes 3 to 4 months on average. Casavo is the first Italian instant buyer, and it promises clients that they can sell their house in 30 days. Of course, this is not possible without a strong technological infrastructure that continuously supports the business in reaching its targets. This talk will present some efficient ways of storing and modelling real-estate data. In particular, it will describe some libraries that are useful for finding and storing geospatial data. Furthermore, different modelling techniques used to get an accurate estimate of an apartment's price will be presented.
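
    Not Casavo's actual pipeline, but a toy illustration of the idea: derive one geospatial feature (haversine distance to the city centre) and feed it, together with the living area, into a regression model. All coordinates and prices below are made up:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        def haversine_km(lat1, lon1, lat2, lon2):
            # Great-circle distance between two points, in kilometres.
            lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
            a = (np.sin((lat2 - lat1) / 2) ** 2
                 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371.0 * np.arcsin(np.sqrt(a))

        # Hypothetical listings: [square_meters, lat, lon]; Milan's Duomo as centre.
        X_raw = np.array([[80, 45.49, 9.22], [55, 45.45, 9.18], [120, 45.51, 9.30]])
        y = np.array([420_000, 310_000, 530_000])  # made-up sale prices in EUR

        dist = haversine_km(X_raw[:, 1], X_raw[:, 2], 45.4642, 9.1900)
        X = np.column_stack([X_raw[:, 0], dist])  # area + distance-to-centre
        model = GradientBoostingRegressor().fit(X, y)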

  • 11:55 - 12:30
    For many scenarios, the cloud is the place to process data and apply business logic. But processing data in the cloud is not always the way to go, because of connectivity, legal issues, or because you need to respond in near-real time. In this session we dive into how Azure Machine Learning and Azure IoT Edge can help in this scenario. Using the Azure Machine Learning service, a cloud service for building, training, deploying, and managing models, we train a custom model. When the model is ready, we use Azure IoT Edge to deploy it to an edge device and find out how it can operate by itself. By the end of the session you will have learned how to train a model using Azure Machine Learning and how to use IoT Edge to deploy it to an IoT Edge device.
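
    A hedged sketch of the first step, registering a trained model with the Azure Machine Learning Python SDK (v1); the config file, model path, and model name below are assumptions:

        from azureml.core import Workspace
        from azureml.core.model import Model

        # Assumes a config.json downloaded from the Azure portal is present.
        ws = Workspace.from_config()

        # Register a locally trained model so edge deployments can pull it.
        model = Model.register(workspace=ws,
                               model_path="outputs/model.pkl",  # hypothetical file
                               model_name="edge-classifier")    # hypothetical name
        print(model.name, model.version)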

  • 13:30 - 14:10
    The amount of data being collected every day in the world is mind-blowing. Business data is becoming a digital supermarket where all sorts of information are collected and stored. Many of those data points are repeatedly recorded numeric values, i.e. time series. From the business point of view, an accurate forecast enables a company to adequately allocate resources and/or react to an anomaly. In this talk, we will focus on forecasting time series with multiple seasonalities driven by human behavior, a long enough history (>2 periods), and high predictability (low noise). We will survey forecasting algorithms: from classical statistical methods (ARIMA, exponential smoothing) through more recent developments (Facebook's Prophet, Bayesian Structural Time Series) to machine learning approaches (recurrent and convolutional neural networks). We will discuss recent Kaggle and M3/M4 competitions and demonstrate the strengths and weaknesses of the methods on a publicly available Kaggle dataset (Web Traffic Time Series Forecasting).
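
    As a concrete taste of the classical methods mentioned, a Holt-Winters (triple exponential smoothing) forecast with statsmodels on a synthetic daily series with weekly seasonality (the data is made up):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        # Synthetic daily series with a weekly seasonality (period = 7).
        idx = pd.date_range("2020-01-01", periods=120, freq="D")
        y = pd.Series(100 + 10 * np.sin(2 * np.pi * np.arange(120) / 7)
                      + np.random.default_rng(0).normal(0, 2, 120), index=idx)

        fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                                   seasonal_periods=7).fit()
        forecast = fit.forecast(14)  # two weeks ahead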

  • 14:10 - 14:45
    "Anything that can go wrong will go wrong" – The Murphy’s Law doesn’t apply only to everyday life. Working on AI project is not safe (at all) and I frequently meet my “old friend” Murphy. Discover with me all that can (and will) go wrong on an image classification or object detection task with convolutional neural networks. We will go through this “Murphy’s project”, seeing at each step what are the main pitfalls, how to detect them and (if possible) how to avoid them.

  • 15:00 - 15:40
    Many data scientists spend the bulk of their time on tasks that do not make full use of their capabilities. Long hours are spent on, for example, preparing and selecting features or making small, incremental model changes. This talk explores how some of this drudge work can be handed over to computers, so that data scientists can focus on the exciting, high-value tasks that they, as humans, are uniquely positioned for.
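
    A small example of handing one such chore, feature selection, over to the machine: scikit-learn's recursive feature elimination with cross-validation. The dataset and estimator are placeholders:

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import RFECV
        from sklearn.linear_model import LogisticRegression

        # 20 features, only 5 informative; RFECV finds a good subset automatically.
        X, y = make_classification(n_samples=500, n_features=20,
                                   n_informative=5, random_state=0)
        selector = RFECV(LogisticRegression(max_iter=1000), step=1, cv=5)
        selector.fit(X, y)
        print("features kept:", selector.support_.sum())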

  • 15:40 - 16:15
    In this session we'll dive into recommendation systems: we'll examine the problem of recommendations and the machine learning techniques for building such systems, and we'll focus on modern neural network architectures for recommendations. We will go over a hands-on example of creating and training a recommendation model using PyTorch, and explore model design and deployment trade-offs. Attendees will learn how to apply deep learning to the problem of recommendations and ranking, and how they can leverage PyTorch to rapidly implement recommendation systems for various business use cases.
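
    A minimal sketch of one such model: matrix factorization with embedding layers in PyTorch. The dimensions and toy interactions below are assumptions, not the speaker's architecture:

        import torch
        import torch.nn as nn

        class MatrixFactorization(nn.Module):
            def __init__(self, n_users, n_items, dim=32):
                super().__init__()
                self.user_emb = nn.Embedding(n_users, dim)
                self.item_emb = nn.Embedding(n_items, dim)

            def forward(self, user_ids, item_ids):
                # Predicted affinity = dot product of user and item vectors.
                return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=1)

        model = MatrixFactorization(n_users=1000, n_items=5000)
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)

        # Toy interactions: (user, item, rating) triples.
        users = torch.tensor([0, 1, 2])
        items = torch.tensor([10, 20, 30])
        ratings = torch.tensor([5.0, 3.0, 1.0])
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(users, items), ratings)
            loss.backward()
            opt.step()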

  • 10:15 - 10:55

  • 10:55 - 11:30
    One of the most important developments in the emergence of deep learning has been the improvement of weight initialization schemes for networks. Naive weight initialization can cause unstable network dynamics during backpropagation in two ways: first, it can lead to saturated activation functions during the forward pass; and second, it can result in vanishing/exploding gradients during the backward pass. However, careful weight initialization can help to avoid these forms of instability in network dynamics. This talk will explain the connection between weight initialization and network stability, and show how modern weight initialization techniques result in more stable networks and hence faster training.
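
    A small numpy experiment illustrating the forward-pass half of the story: with naive (std = 1) initialization a deep tanh network saturates, while Xavier/Glorot scaling (std = 1/sqrt(fan_in)) keeps activation variance stable. The depth and widths are arbitrary:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=(256, 512))  # a batch of 256 inputs, width 512

        for name, scale in [("naive (std=1)", 1.0),
                            ("Xavier (std=1/sqrt(n))", 1 / np.sqrt(512))]:
            h = x
            for _ in range(10):  # 10-layer tanh network, no biases
                W = rng.normal(scale=scale, size=(512, 512))
                h = np.tanh(h @ W)
            print(f"{name}: layer-10 activation std = {h.std():.3f}")
        # Naive init drives tanh into its saturated tails (std near 1,
        # gradients vanish); Xavier keeps activations in the active range.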

  • 11:45 - 12:25
    One of the most annoying, soul-destroying tasks in data science and analytics is cleaning data. It was no different for us at Prezi, but there came a point where we said enough is enough: we want to minimize the time spent on this. We went through several iterations in getting the right data and learned that the key to success ultimately lies in how you capture that data in the first place, and in providing quick feedback to teams. In this talk, I will cover how our need to better understand how users use our product led us to design a system for event processing that delivers those insights. Even though this does not sound hard, we burnt ourselves a couple of times, and we redesigned our data ingestion pipeline a couple of times to get to where we are today. We will start by covering how our data ingestion pipeline evolved: from semi-structured event data copied to S3 with a bash script, to Avro with the Confluent schema registry, ingesting events from Apache Kafka to S3 with Apache Gobblin. Even though Apache Kafka helps a lot with scaling, just using Kafka is not a silver bullet: we had to introduce multiple components, like the Avro format and the schema registry, to fill in the missing pieces. We will also cover how we built around this ecosystem to make sure engineers can't break our system, to make it less painful to instrument the right events, and how instrumentation works across the various platforms we support (Mac, Windows, iOS, Android, Web).
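
    To make the Avro piece concrete, a minimal sketch of schema-based serialization with the fastavro library; the schema and event are toy examples, not Prezi's:

        import io
        from fastavro import parse_schema, schemaless_writer, schemaless_reader

        schema = parse_schema({
            "type": "record",
            "name": "UserEvent",
            "fields": [
                {"name": "user_id", "type": "string"},
                {"name": "action", "type": "string"},
                {"name": "ts", "type": "long"},
            ],
        })

        event = {"user_id": "u42", "action": "present_started", "ts": 1590000000}
        buf = io.BytesIO()
        schemaless_writer(buf, schema, event)  # bytes you'd hand to a Kafka producer
        buf.seek(0)
        print(schemaless_reader(buf, schema))  # round-trip back to a dict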

  • 12:25 - 13:00
    In this talk we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible number of out-of-vocabulary words. We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. We train these embeddings on a new dataset we are releasing publicly. Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.
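
    The subword mechanism at the heart of this approach can be tried with gensim's FastText implementation; this sketch only illustrates the idea and is not the authors' training setup:

        from gensim.models import FastText

        sentences = [["the", "quick", "brown", "fox"],
                     ["a", "quik", "brown", "dog"]]  # "quik" is a misspelling

        # Character n-grams (min_n..max_n) let the model build vectors for
        # misspelled or unseen words from shared subwords.
        model = FastText(sentences, vector_size=32, window=3,
                         min_count=1, min_n=3, max_n=5, epochs=50)

        # Misspellings land near their correct variants via shared n-grams.
        print(model.wv.similarity("quick", "quik"))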

  • 14:00 - 14:40
    Deep learning achieves great performance in many areas, and it's especially useful for computer vision tasks. However, using deep learning in production is challenging: it requires a lot of effort to develop the infrastructure for serving deep learning models at scale. In this talk, we present the system for image classification we built at OLX. The main requirement for this system is to classify tens of millions of images daily and to serve reliably even during peak hours. It took a year and lots of trial and error to arrive at the system we currently use. We'll present the details of this journey and tell our story: how we approached it initially, what worked and what didn't, how it evolved, and how it's working right now. Of course, we'll walk you through the technical details and show how to implement a similar system using Python, AWS, Kubernetes, MXNet, and TensorFlow.
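
    A toy skeleton of a model-serving endpoint of the kind described, using Flask; the classifier here is a stub, and in the real system described above the inference would run on MXNet/TensorFlow behind Kubernetes:

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def classify(image_bytes):
            # Stub: a production system would run a trained model here.
            return {"label": "placeholder", "confidence": 0.0}

        @app.route("/predict", methods=["POST"])
        def predict():
            image_bytes = request.get_data()  # raw image payload
            return jsonify(classify(image_bytes))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)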

  • 14:40 - 15:15
    Finance was one of the first industries that started using methods which we today collectively call "data science". In this talk Karl Märka, the Head of Data Science at Creditinfo Estonia, a branch of the international credit bureau and business services provider Creditinfo Group, gives an overview of the latest trends in predictive analytics in the financial sector. Why do banks and credit bureaus have large teams of analysts, what are they trying to predict, what problems are they facing in their day-to-day work and where is the industry going in the future?

  • 15:30 - 16:10

  • 16:10 - 16:45
    In technical communication, the main thing is to keep the main thing the main thing. There are multiple ways to uphold this principle, some of which require careful chart fine-tuning. However, there is one tool that is easy to master, fast to apply, and offers a high return on investment: the chart title. In this talk, I will show that a proper chart title not only improves the comprehensibility of a chart, but also leads to better charting decisions and holds the presenter accountable.
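
    In matplotlib terms, the advice boils down to a one-line habit: state the finding, not just the variables. A hypothetical example with made-up data:

        import matplotlib.pyplot as plt

        months = ["Jan", "Feb", "Mar", "Apr"]
        signups = [120, 135, 180, 240]  # made-up data

        fig, ax = plt.subplots()
        ax.plot(months, signups, marker="o")
        # A takeaway title, not just "Signups by month".
        ax.set_title("Signups doubled between January and April")
        ax.set_xlabel("Month")
        ax.set_ylabel("New signups")
        fig.savefig("signups.png")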


Community Partners



Chopin Hall - Palas Congress Hall, Palas Street no. 7A, Iași, Romania 700259