What is NDR?

NDR is an artificial intelligence conference. Well, technically, it’s an artificial intelligence, machine learning, deep learning, and data science conference. We don’t discriminate :).

NDR-113 is also the main character in Isaac Asimov’s beautiful sci-fi novel, The Positronic Man. The book tells of a robot that begins to display human characteristics, such as creativity, emotion, and self-awareness. We felt that naming our conference after him was an appropriate homage to the story.

NDR is something we think you’re going to love.


Chopin Hall, Palas Ensemble
Iasi, Romania

June 7th, 2018


The good people at Strongbytes and Codecamp, naturally.

Shoot an email to [email protected] for any questions, remarks, or praise you may have, or like our Facebook page to get the latest updates.

You should also subscribe to our newsletter.


Bringing Great Minds Together

Your chance to meet international experts and learn from their experience

We aim to bring together practitioners of data science, machine learning, and deep learning. Filled with carefully selected technical talks, our conference is designed to educate and inspire, with experienced speakers sharing lessons learned, best practices, and business use cases.

An Energising Experience

1 day, 1 track, 11 sessions

At the end of the day you’ll come out excited and exhausted, wanting more. You will get a better understanding of how to build intelligent applications, and see how companies are using intelligent techniques in production. You will find out about new tools and techniques, learn how to improve your workflow, and discover how to start your data science career.

See you there!


Our Speakers

Conference Schedule

  • NDR, July 28, 2020

  • NDR, May 28, 2020

  • 09:45 (EEST) - 10:25
    One of the most annoying, soul-destroying tasks in data science and analytics is cleaning data. It was no different for us at Prezi, but at some point we said enough and decided to minimize the time spent on it. We went through several iterations of getting the right data and learned that the key to success ultimately lies in how you capture that data in the first place, and in providing quick feedback to teams. In this talk, I will cover how our need to better understand how users use our product led us to design a system for event processing that delivers those insights. Even though this does not sound hard, we burnt ourselves a couple of times and redesigned our data ingestion pipeline more than once to get to where we are today. We will start by covering how the pipeline evolved from semi-structured event data copied to S3 with a bash script to Avro events with the Confluent schema registry, ingested from Apache Kafka into S3 with Apache Gobblin. Even though Apache Kafka helps a lot with scaling, Kafka alone is not a silver bullet: we had to introduce additional components, such as the Avro format and the schema registry, to fill in the missing pieces. We will also cover how we built around this ecosystem to make sure engineers can’t break the system, to make it less painful to instrument the right events, and how instrumentation works across the various platforms we support (Mac, Windows, iOS, Android, Web).

  • 10:25 (EEST) - 11:05
    Deep learning achieves great performance in many areas, and it’s especially useful for computer vision tasks. However, using deep learning in production is challenging: it requires a lot of effort for developing the infrastructure to serve deep learning models at scale. In this talk, we present the system for image classification we built at OLX. The main requirement for this system is to classify tens of millions of images daily and be able to serve reliably even during peak hours. It took a year and lots of trial and error to arrive at the system we currently use. We’ll present the details of this journey and tell our story: how we approached it initially, what worked and what didn’t, how it evolved and how it’s working right now. Of course, we’ll walk you through the technical details and show how to implement a similar system using Python, AWS, Kubernetes, MXNet, and TensorFlow.

  • 11:15 (EEST) - 11:55
    One of the most important developments in the emergence of deep learning has been the improvement of weight initialization schemes for networks. Naive weight initialization can cause unstable network dynamics during backpropagation in two ways: first, it can lead to saturated activation functions during the forward pass; and, second, it can result in vanishing or exploding gradients during the backward pass. However, careful weight initialization can help to avoid these forms of instability in network dynamics. This talk will explain the connection between weight initialization and network stability and show how modern weight initialization techniques result in more stable networks, and hence faster training.
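As a toy illustration of the stability argument above (not code from the talk), the sketch below pushes random activations through a few linear layers in pure Python. With naive unit-variance weights, the activation variance grows by roughly a factor of fan_in per layer; with Xavier-style scaling (std = 1/sqrt(fan_in)) it stays roughly constant:

```python
import math
import random

random.seed(0)

def layer(x, fan_in, fan_out, std):
    # One linear layer: weights drawn i.i.d. from N(0, std^2), no bias.
    W = [[random.gauss(0.0, std) for _ in range(fan_in)] for _ in range(fan_out)]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

n = 256
x = [random.gauss(0.0, 1.0) for _ in range(n)]

# Naive init (std = 1): variance multiplies by ~fan_in at every layer.
naive = x
for _ in range(3):
    naive = layer(naive, n, n, 1.0)

# Xavier-style init (std = 1/sqrt(fan_in)): variance stays near 1.
scaled = x
for _ in range(3):
    scaled = layer(scaled, n, n, 1.0 / math.sqrt(n))

print(variance(naive))   # huge: exploding activations
print(variance(scaled))  # order 1: stable activations
```

With saturating activations like tanh, the exploding pre-activations would also drive units into their flat regions, which is the other failure mode the abstract mentions.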

  • 11:55 (EEST) - 12:35
    Even if it is on the downward slope of the hype cycle, automated driving remains a feasible, challenging, and attractive technology field from both a series-development and a research point of view. The presentation will show the main trends in the ongoing development, some of the remaining challenges ahead, and how Bosch Engineering Center Cluj is contributing to this leading-edge work.

  • 13:20 (EEST) - 14:00
    In this talk, we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words. We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. We train these embeddings on a new dataset we are releasing publicly. Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.
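The talk's actual method combines FastText with a supervised misspelling task; the sketch below (function name `char_ngrams` is mine, not from the paper) only illustrates the subword idea that makes this possible: a misspelling shares most of its character n-grams with the correct form, so embeddings built as sums of subword vectors land close together.

```python
def char_ngrams(word, nmin=3, nmax=5):
    # FastText-style subwords: add boundary markers, then take all
    # character n-grams of length nmin..nmax.
    w = "<" + word + ">"
    return [w[i:i + n]
            for n in range(nmin, nmax + 1)
            for i in range(len(w) - n + 1)]

# A typo and the correct word overlap heavily in subword space.
correct = set(char_ngrams("hello"))
typo = set(char_ngrams("helo"))
shared = correct & typo
print(sorted(shared))
```

Because the word vector is the sum of its subword vectors, the large overlap in `shared` pulls the two embeddings together even though "helo" never appears in the training vocabulary.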

  • 14:00 (EEST) - 14:40
    Finance was one of the first industries that started using methods which we today collectively call "data science". In this talk Karl Märka, the Head of Data Science at Creditinfo Estonia, a branch of the international credit bureau and business services provider Creditinfo Group, gives an overview of the latest trends in predictive analytics in the financial sector. Why do banks and credit bureaus have large teams of analysts, what are they trying to predict, what problems are they facing in their day-to-day work and where is the industry going in the future?

  • 14:50 (EEST) - 15:30
    In technical communication, the main thing is to keep the main thing the main thing. There are multiple ways to ensure this principle. Some of them require careful chart fine-tuning. However, there is one tool that is easy to master, fast to apply, and that provides a high return on investment: chart titles. In this talk, I will show that a proper chart title not only improves the comprehensibility of a chart, but also leads to better charting decisions and holds the presenter accountable.

  • 15:30 (EEST) - 16:10
    Machine learning model fairness and interpretability are critical for data scientists, researchers, and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important for debugging machine learning models and making informed decisions about how to improve them. In this session, Francesca will go over a few methods and tools that enable you to "unpack" machine learning models, gain insights into how and why they produce specific results, assess your AI system's fairness, and mitigate any observed fairness issues. Using open-source fairness and interpretability packages, attendees will learn how to:
    * Explain model predictions by generating feature importance values for the entire model and/or individual datapoints.
    * Achieve model interpretability on real-world datasets at scale, during training and inference.
    * Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
    * Leverage additional interactive visualizations to assess which groups of users might be negatively impacted by a model and compare multiple models in terms of their fairness and performance.

  • 16:20 (EEST) - 17:00
    Research and knowledge generation processes have always – and increasingly - been data-heavy. At the same time, a successful research product must be easy to understand and use for further decision-making. Sustainalytics will share its experience of using Natural Language Generation technology to start generating elements of sustainability-focused research products based on quantitative data-heavy inputs.

  • 17:00 (EEST) - 17:40
    We expect AI systems to provide us with insights, and to augment or even replace our decision-making processes. The current state of the art in AI, deep learning, while powerful and promising, only solves part of the problem. Paired with the creativity of evolutionary computation, AI has the potential for ubiquity, but the challenge is not just the core technology. We are still where computing was in the early eighties, or the web in the mid-nineties. What other steps do we need to take in order to get AI to a point of ubiquitous use, and once we do, what is our role? I will present an AI technologist's perspective on the above.

  • 09:45 (EEST) - 10:25
    If you wonder what is next in the evolution towards general AI, then this session is for you. We have seen some painful failures of artificial intelligence pointing to a lack of 'common sense'. Are neural networks really the solution we seek, or is a new path needed? And what role can AI play in tackling some of society's largest challenges? Wouter Denayer will share the latest innovations coming out of IBM Research, as well as his perspectives and thought-provoking ideas highlighting how humans and machines can work together.

  • 10:25 (EEST) - 11:00
    We all love linear regression for its interpretability: increase square meters by 1, and the rent goes up by 8 euros. A human can easily understand why this model made a certain prediction. Complex machine learning models like tree ensembles or neural networks usually make better predictions, but this comes at a price: it is hard to understand these models. In this talk, we'll look at a few common problems of black-box models, e.g. unwanted discrimination or unexplainable false predictions ("bugs"). Next, we'll go over three methods to pry open these models and gain some insight into how and why they make their predictions. We'll conclude with a few predictions about the future of (interpretable) machine learning. Specifically, the topics covered are:
    * What makes a model interpretable? Linear models, trees
    * How to understand your model
    * Model-agnostic methods for interpretability: Permutation Feature Importance, Partial Dependence Plots (PDPs), Shapley values / SHAP
    * The future of (interpretable) machine learning
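Of the model-agnostic methods the abstract lists, permutation feature importance is the easiest to sketch. The toy below (the data, the linear "model", and all names are illustrative assumptions, not material from the talk) shuffles one feature column at a time and measures how much the prediction error grows; features the model relies on show a large increase, ignored features show none:

```python
import random

random.seed(1)

# Toy data: the target depends strongly on x0, weakly on x1; x2 is ignored.
X = [[random.uniform(0, 1) for _ in range(3)] for _ in range(200)]
y = [3.0 * a + 0.3 * b for a, b, _ in X]

def model(row):
    # Stand-in for a trained black-box model (here, the true rule).
    return 3.0 * row[0] + 0.3 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(j):
    # Shuffle column j, keep everything else fixed, and measure the error increase.
    col = [r[j] for r in X]
    random.shuffle(col)
    Xp = [r[:j] + [v] + r[j + 1:] for r, v in zip(X, col)]
    return mse(Xp, y) - baseline

imps = [permutation_importance(j) for j in range(3)]
print(imps)  # x0 largest, x1 small, x2 zero
```

The same recipe works unchanged for any model, which is exactly what "model-agnostic" means here.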

  • 11:15 (EEST) - 11:55
    Selling a house in Italy takes on average 3 to 4 months. Casavo is the first Italian instant buyer, and it promises clients that they can sell their house in 30 days. Of course, this is not possible without a strong technological infrastructure that permanently supports the business in reaching its targets. This talk will present some efficient ways of storing and modelling real-estate data. In particular, it will describe some libraries that are useful for finding and storing geospatial data. Furthermore, different modelling techniques used to get an accurate estimate of an apartment's price will be presented.

  • 11:55 (EEST) - 12:30
    Artificial intelligence and machine learning can be used in many ways to increase the productivity of business processes and gather meaningful insights, by analyzing images, texts, and trends within unstructured flows of data. While many tasks can be solved using existing models, in some cases it is also necessary to train your own model for more specific tasks, or for increased accuracy. In this session, we will explore the complete path of integrating text analysis intelligent services into the business processes of Tailwind Traders: starting from pre-built models available as cognitive services, up to training a third-party custom neural model for Aspect-Based Sentiment Analysis, available as part of Intel NLP Architect, using Azure Machine Learning service. We will talk about cases when one needs a custom model and show how to fine-tune model hyperparameters using HyperDrive.

  • 13:30 (EEST) - 14:10
    The amount of data being collected every day in the world is mind-blowing. Business data is becoming like a digital supermarket where all sorts of information are collected and stored. Many of those data points are repeatedly recorded numeric values, i.e. time series. From the business point of view, an accurate forecast enables a company to adequately allocate resources and/or react to an anomaly. In this talk, we will focus on forecasting time series that have multiple seasonalities driven by human behavior, a long enough history (>2 periods), and high predictability (low noise). We will overview forecasting algorithms: from classical statistical methods (ARIMA, exponential smoothing) through more recent developments (Facebook's Prophet, Bayesian Structural Time Series) to machine learning approaches (recurrent and convolutional neural networks). We will discuss recent Kaggle and M3/M4 competitions and demonstrate the strengths and weaknesses of the methods on a publicly available Kaggle dataset (Web Traffic Time Series Forecasting).
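As a minimal illustration of the classical end of that spectrum (function names are mine, not from the talk), here are two standard baselines any of the fancier methods must beat: simple exponential smoothing, and a seasonal-naive forecast that simply repeats the last full seasonal cycle:

```python
def ses_forecast(series, alpha=0.5):
    # Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    # The one-step-ahead forecast is the final smoothed level.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def seasonal_naive(series, period, horizon):
    # Repeat the most recent full cycle of length `period` for `horizon` steps.
    return [series[len(series) - period + (h % period)] for h in range(horizon)]

print(ses_forecast([10.0, 12.0, 11.0, 10.5]))
print(seasonal_naive([1, 2, 3, 1, 2, 3], period=3, horizon=4))
```

For series with multiple seasonalities, like the web-traffic data the abstract mentions, such baselines are routinely used as the yardstick in M-competitions and Kaggle leaderboards.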

  • 14:10 (EEST) - 14:45
    "Anything that can go wrong will go wrong" – Murphy’s Law doesn’t apply only to everyday life. Working on an AI project is not safe (at all), and I frequently meet my “old friend” Murphy. Discover with me everything that can (and will) go wrong in an image classification or object detection task with convolutional neural networks. We will go through this “Murphy’s project” step by step, seeing what the main pitfalls are, how to detect them, and (if possible) how to avoid them.

  • 15:00 (EEST) - 15:40
    Many data scientists spend the bulk of their time doing tasks that do not make full use of their capabilities. Long hours are spent on e.g. preparing and selecting features or making small, incremental model changes. This talk explores how some of the more drudging work can be handed over to computers so that data scientists can focus on the exciting and high value tasks that they, as humans, are uniquely positioned for.

  • 15:40 (EEST) - 16:15
    An adapted state-of-the-art method is used to extract the approximate governing equations from sensor measurements. The model is used to predict the temperature evolution of a pump.

  • 16:30 (EEST) - 17:05
    We will discuss AI-powered personalization at Facebook's scale: the challenges and practical techniques applied to overcome these challenges. Attendees will learn key design requirements for personalization at Facebook scale, and the techniques applied to meet these requirements: from modern deep learning modeling techniques, through distributed training approaches and up to the system architecture designed for training and inference.

  • 17:10 (EEST) - 17:50


Community Partners



Chopin Hall - Palas Congress Hall, Palas Street no. 7A, Iași, Romania 700259