Join DevTalks Cluj and attend technical hands-on workshops!

17 April 2018 devTalks

Most of the time, one keynote is not enough to cover all the awesome things you can build with a new technology or tool. So we want to fill this need with highly technical hands-on workshops where the trainers can really get into the details of what these new technologies can do. The ROCKSTAR or UNICORN ticket gives you exclusive access to one workshop of your choice, so check them out and book your seat!

  • IoT Data Analytics appliance for the EU energy market with Dragos Nicola, Data Scientist & Remus Octavian Cimpean, PhD – Data Science Consultant at InflectionPoint | 14:00 – 16:00

The EU Commission regulates the Energy Market across Europe, requiring all distributors to install Smart Meters at consumption points.

This situation creates a new type of business opportunity/problem: distributors need to know their customers’ behavior by analyzing data streams captured from the Smart Meters, combined with the contractual details.

Because the EU Energy Market is a liberalized one, customer churn risk assessment becomes a crucial problem for Energy Distributors. Such a problem can only be addressed through predictive modelling, which is a matter of applied Data Science.
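As a toy illustration of the kind of analysis involved, here is a plain-Python sketch that combines smart-meter readings with contract details and scores churn risk with a hand-set heuristic. All field names, customers and weights are illustrative assumptions for this sketch, not the workshop's actual model, and a real solution would replace the heuristic with a trained predictive model:

```python
# Toy churn-risk sketch: join smart-meter usage with contract details
# and score each customer with a hand-set heuristic. Every name and
# threshold here is an illustrative assumption.
meter_readings = {                     # customer_id -> monthly kWh
    "c1": [310, 295, 280, 260],
    "c2": [120, 125, 118, 122],
}
contracts = {                          # customer_id -> contract details
    "c1": {"months_left": 2, "price_per_kwh": 0.21},
    "c2": {"months_left": 18, "price_per_kwh": 0.15},
}

def churn_risk(customer_id):
    usage = meter_readings[customer_id]
    contract = contracts[customer_id]
    # declining consumption and a contract near its end both raise risk
    trend = (usage[-1] - usage[0]) / usage[0]   # negative = declining
    risk = 0.0
    if trend < -0.10:
        risk += 0.5
    if contract["months_left"] <= 3:
        risk += 0.3
    if contract["price_per_kwh"] > 0.18:
        risk += 0.2
    return round(risk, 2)

for cid in meter_readings:
    print(cid, churn_risk(cid))        # c1 scores high, c2 scores low
```

In a real deployment, the hand-set thresholds would be learned from historical churn labels, which is exactly the predictive-modelling work the workshop is about.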

Given that Energy Distributors will have IT partners for data-stream connectivity and processing, the knowledge developed by a Data Science company becomes the core knowledge in this equation.

Topics for the workshop:

– energy problem description

– data science solution presentation

– tools for implementation

– demo on the data science solution

– discussion on business development in this area

– data science solution scalability and deployment into the client environment

– IT/Data Science partnership framework for solution architecture and implementation

Register here!


  • A peek at Clojure with Alexandru Gherega, Computer Scientist at | 14:00 – 17:00

14:00 – 15:00

  • Setup – fine tuning
  • Clojure induction & hands-on ramp-up

15:00 – 16:00

  • core topics [REPL, types, functions, data structures, FP design patterns]
  • start off your <first?> Clojure project
  • experiment with useful Clojure APIs (e.g. collections, threads, transactions)

16:00 – 17:00

  • Use your new Clojure knowledge and FP passion to develop a small Machine Learning [micro]service

Register here!


  • BigData processing and Machine Learning introduction with Apache Spark with Daniel Sarbe, Development Manager, Big Data and Cloud Machine Translation @ SDL, Tudor Lapusan, BigData developer & Adrian Bona, Software Engineer at Telenav – 18th of May

This is a dedicated one-day workshop on the 18th of May. Its main focus is to get you familiar with Spark core basics (RDDs, DataFrames, Transformations and Actions) through hands-on exercises. We will also present other Spark capabilities, like SQL and Machine Learning, using multiple examples.

All of these examples will be explained in three different programming languages (Java, Scala and Python), so workshop participants can choose the most familiar environment.

We have a dataset offered by, from which we can build interesting examples and extract meaningful insights from the data.

After this workshop, you will have installed the Spark infrastructure on your own laptop and will have the necessary knowledge to continue working with Spark and try other examples.

The intended audience is software developers and technical architects who are interested in trying their first examples in Big Data and machine learning using Apache Spark.

Apache Spark is the next Big Data computing framework: compared to Hadoop MapReduce, Spark is easier to use and offers a more flexible computing model. Spark’s popularity is due to the fact that it excels at in-memory computations, which in some cases are 10x-100x faster than Hadoop MapReduce.

Spark was designed from the ground up to support multiple processing modules, like batch processing, SQL, machine learning, streaming and graph processing. It provides a nice abstraction of large datasets with the concepts of Resilient Distributed Datasets (RDDs) and DataFrames, both offering elegant APIs to easily manipulate them.
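The transformation/action distinction at the heart of the RDD API can be illustrated without a Spark install. The toy `LazyDataset` class below is a hypothetical stand-in that only mimics the shape of the API (lazy `map`/`filter` transformations, eager `collect`/`count` actions); it is not Spark itself:

```python
class LazyDataset:
    """Toy stand-in for an RDD: transformations are recorded lazily,
    and actions trigger evaluation of the whole pipeline at once."""

    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []          # pending transformations

    # --- transformations: return a new dataset, compute nothing yet ---
    def map(self, f):
        return LazyDataset(self._data, self._ops + [("map", f)])

    def filter(self, pred):
        return LazyDataset(self._data, self._ops + [("filter", pred)])

    # --- actions: run the recorded pipeline and return a value ---
    def collect(self):
        out = self._data
        for kind, f in self._ops:
            if kind == "map":
                out = [f(x) for x in out]
            else:
                out = [x for x in out if f(x)]
        return out

    def count(self):
        return len(self.collect())

readings = LazyDataset([1.2, 3.4, 0.0, 5.6])          # kWh samples
pipeline = readings.filter(lambda kwh: kwh > 0) \
                   .map(lambda kwh: kwh * 0.15)       # price per kWh
print(pipeline.count())    # action: evaluation happens here -> 3
```

Real Spark pushes this same idea further: because nothing runs until an action, the engine can plan, distribute and cache the whole pipeline across a cluster.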

Register here!

Also, have you heard the good news? If you register by April 21st, your colleague or friend will get one ticket free. Get the discount code here!