Deep Neural Pipeline for Churn Prediction

Posted in Blog

Customer churn is an essential retail metric used in business predictive analytics systems to quantify the number of customers who have left a company. All retail and business-to-consumer companies carefully analyze customer behavior in order to prevent customers from ceasing their relationship with the company – in other words, from churning. With the latest advances in Artificial Intelligence, and particularly in Deep Learning, we have a new set of powerful tools ready to employ within multiple horizontal and vertical domains – such as the horizontal domain of predictive business analytics. One of the main goals of predictive analytics is the research and development of an almost-perfect churn detection system. This paper's objective is to propose a state-of-the-art churn prediction model based on deep neural models and time-to-next-event models, employing Big Data processing with massively parallel computing on GPUs.

   Keywords — machine learning, business predictive analytics, massively parallel computing on GPU, deep learning, customer retention, big data, churn prediction
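As a minimal, hypothetical illustration of the time-to-next-event idea behind such a model (the function name, purchase history and 30-day horizon below are invented for this sketch), one can estimate a customer's churn risk from the empirical distribution of their inter-purchase gaps:

```python
from datetime import date

def churn_probability(purchase_dates, horizon_days):
    """Estimate P(customer does not return within `horizon_days`)
    from the empirical distribution of inter-purchase gaps."""
    dates = sorted(purchase_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    if not gaps:
        return None  # not enough history to estimate anything
    # Fraction of historical gaps longer than the horizon:
    return sum(g > horizon_days for g in gaps) / len(gaps)

history = [date(2018, 1, 3), date(2018, 1, 20), date(2018, 2, 1),
           date(2018, 3, 15), date(2018, 3, 25)]
p = churn_probability(history, horizon_days=30)
```

A production pipeline would replace this crude empirical estimate with a learned survival or recurrent model over the full transaction stream, but the target quantity is the same.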


Deep recommender engine based on efficient product embeddings neural pipeline


Predictive analytics systems are currently one of the most important areas of research and development within the Artificial Intelligence domain, and particularly in Machine Learning. One of the “holy grails” of predictive analytics is the research and development of the “perfect” recommendation system. In our paper we propose an advanced pipeline model for the multi-task objective of determining product complementarity, similarity and sales prediction, using deep neural models applied to big-data sequential transaction systems. Our highly parallelized hybrid pipeline consists of both unsupervised and supervised models, used for the objectives of generating semantic product embeddings and predicting sales, respectively. Our experimentation and benchmarking have been done using a very large pharma-industry retailer's Big Data stream.

   Keywords — recommender systems; efficient embeddings; machine learning; deep learning; big data; high-performance computing; GPU computing.
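As a toy, hypothetical stand-in for the learned product embeddings in such a pipeline (a real system would train word2vec-style models on millions of baskets; every product name and basket below is invented), even count-based PPMI vectors over within-basket co-occurrences capture product similarity:

```python
import numpy as np

# Hypothetical mini transaction log: each basket is one "sentence" of product IDs.
baskets = [
    ["aspirin", "vitamin_c"],
    ["aspirin", "vitamin_c", "ibuprofen"],
    ["vitamin_c", "ibuprofen"],
    ["shampoo", "soap"],
    ["shampoo", "soap"],
    ["soap", "aspirin"],
]

products = sorted({p for b in baskets for p in b})
idx = {p: i for i, p in enumerate(products)}
n = len(products)

# Within-basket co-occurrence counts (the basket acts as the context window).
C = np.zeros((n, n))
for basket in baskets:
    for a in basket:
        for b in basket:
            if a != b:
                C[idx[a], idx[b]] += 1.0

# Positive pointwise mutual information: rows serve as sparse product vectors.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log(C * total / (row * col))
ppmi = np.maximum(pmi, 0.0)  # -inf entries (zero counts) clamp to 0

def similarity(p, q):
    """Cosine similarity between two products' PPMI rows."""
    u, v = ppmi[idx[p]], ppmi[idx[q]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```

The paper's pipeline learns dense embeddings with neural models instead, but the downstream use – nearest-neighbor queries over product vectors – is the same.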


Cloudifier Virtual Apps – virtual desktop predictive analytics apps environment based on GPU computing framework


The need for systems capable of conducting inferential analysis and predictive analytics is ubiquitous in a global information society. With the recent advances in the areas of predictive machine learning models and massively parallel computing, a new set of resources is now available to the computer science community for researching and developing new, truly intelligent and innovative applications. In our research we present the principles, architecture and current experimentation results for an online platform capable of both hosting and generating intelligent applications – applications with predictive analytics capabilities.

   Keywords — artificial intelligence, machine learning, virtual desktop, predictive analytics


Model Architecture for Automatic Translation and Migration of Legacy Applications to Cloud Computing Environments


On-demand computing, Software-as-a-Service, Platform-as-a-Service – and Cloud Computing in general – is currently the main approach by which both academic and commercial domains deliver systems and content. Nevertheless, there still remains a huge segment of legacy systems and applications, ranging from accounting and management information systems to scientific software, based on classic desktop or simple client-server architectures. Although in recent years more and more companies and organizations have invested significant budgets in translating legacy apps to online cloud-enabled environments, an important segment of applications remains that, for various reasons (budget-related in most cases), has not been translated. This paper proposes an innovative pipeline model architecture for the automated translation and migration of legacy applications to cloud-enabled environments with minimal software development costs.

   Keywords — automatic programming; cloud computing; migration; machine-learning; automatic translation.


What makes a good Data Scientist?


Yesterday I was asked “What makes a great Data Scientist?”, so I gave a few quick-fire answers, trying to replace “great” with “good” in the question:
“Knowing when it is ok to apply a tree-based boosting machine to a regression problem and why you might need non-deterministic tree training.”
“Knowing how to merge a deep vision technique and a neural NLP one in order to obtain a powerful business predictive analytics model.”
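The first point above, about non-deterministic tree training, can be sketched with a toy stochastic gradient-boosting loop (all names and data below are illustrative, not a production implementation): each round fits a least-squares stump on a random subset of features, so different seeds grow different ensembles.

```python
import numpy as np

def fit_stump(X, residual, feature_ids):
    """Least-squares decision stump restricted to a subset of features."""
    best = None
    for j in feature_ids:
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pl, pr = residual[left].mean(), residual[~left].mean()
            err = ((residual - np.where(left, pl, pr)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, pl, pr)
    return best[1:]  # (feature, threshold, left value, right value)

def boost(X, y, n_rounds=20, lr=0.3, feature_frac=0.5, seed=0):
    """Gradient boosting for squared loss; the random feature subset drawn
    each round makes tree training non-deterministic across seeds."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        residual = y - pred  # negative gradient of the squared loss
        k = max(1, int(feature_frac * n_feat))
        feats = rng.choice(n_feat, size=k, replace=False)
        j, t, pl, pr = fit_stump(X, residual, feats)
        pred = pred + lr * np.where(X[:, j] <= t, pl, pr)
    return pred

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
pred = boost(X, y)
mse = float(((y - pred) ** 2).mean())
```

Random feature (or row) subsampling is the same knob that makes real boosting libraries non-deterministic across seeds, and it often improves generalization.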

“Understanding how a business process generates data, how that data can be extracted, how the extracted data can be remodeled in order for it to be fed to state-of-the-art models, and finally, how that trained model will come back to the business process with real improvements.”

“Finding the best pre-trained classification model for the task of preparing and clustering unlabeled cross-domain data.”

“Not getting attached to your beloved framework, and being flexible enough to understand that you might need to do your initial research in Keras, train the model with an exotic constrained loss in TensorFlow while carefully babysitting it, and then deploy the model to production on your TX2 using TensorRT.”
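To make the “exotic constrained loss” concrete, here is one possible shape such a loss might take – a numpy sketch only, with the cap, weight and data all invented for illustration: a squared-error term plus a hinge-style penalty on predictions that exceed a business-imposed cap. In TensorFlow the same expression would be written with tensor ops and minimized directly.

```python
import numpy as np

def constrained_mse(y_true, y_pred, cap=100.0, weight=10.0):
    """Squared error plus a hinge penalty for predictions above a business cap.
    Illustrative stand-in for an 'exotic constrained loss'."""
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = np.mean(np.maximum(y_pred - cap, 0.0) ** 2)
    return float(mse + weight * penalty)

y_true = np.array([90.0, 95.0, 80.0])
ok = np.array([92.0, 94.0, 81.0])    # respects the cap: penalty term is zero
bad = np.array([92.0, 94.0, 130.0])  # violates the cap: penalty dominates
```

The penalty weight is a hyperparameter: large enough and the optimizer treats the cap as a hard constraint, which is exactly the kind of training run that needs babysitting.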

Then, after thinking more thoroughly, I gave a more extensive and broader answer, and it became increasingly apparent that the definition of the “Data Scientist” concept evolves day after day, together with the slow adoption of big-data machine learning in our lives – at work and at home. What I think is really important, probably the most vital consideration, is the combination of multi-domain experience and a firm resolve to continuously explore, understand and evolve.

Multi-domain (or cross-domain if you want) experience allows the Data Scientist to truly understand the natural processes that are being observed. Obviously enough, following the ‘understanding’ comes the actual hypothesis formulation and testing stage and then the rest of the usual steps.

I have seen a myriad of data science experts and machine learning practitioners with a wealth of experience making statements such as: “the best model can be a simple logistic regression with well-chosen features”, or “using a deep neural model only increases model accuracy by a few percent versus a shallow model based on well-chosen features”, or “the self-explainability of the linear model cannot be traded for the black-box style of a neural model”. While there is a high probability that most of the time this is true, over the past 5 years it has been repeatedly demonstrated that new deep-learning research from one domain can be successfully applied to another. Take, for example, deep neural models from NLP research such as word2vec or GloVe, applied with great proven success in recommender systems by various research groups; the list could continue, but that is beside the point of this post.
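As a minimal illustration of that word2vec-to-recommender transfer (the product vectors below are invented, standing in for embeddings a word2vec-style model would learn from purchase sequences), recommendation reduces to a cosine nearest-neighbor query over the embedding table:

```python
import numpy as np

# Hypothetical pre-trained product vectors; in practice these would be
# learned by a word2vec/GloVe-style model over customer purchase sequences.
vectors = {
    "razor":        np.array([0.9, 0.1, 0.0]),
    "shaving_foam": np.array([0.8, 0.2, 0.1]),
    "aftershave":   np.array([0.7, 0.3, 0.0]),
    "cat_food":     np.array([0.0, 0.1, 0.9]),
    "cat_litter":   np.array([0.1, 0.0, 0.8]),
}

def recommend(query, k=2):
    """Return the top-k products by cosine similarity to the query's vector."""
    q = vectors[query]
    qn = q / np.linalg.norm(q)
    scores = {p: float(v @ qn / np.linalg.norm(v))
              for p, v in vectors.items() if p != query}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Products bought in similar contexts end up with nearby vectors, exactly as words used in similar contexts do in NLP – which is why the technique transfers so cleanly.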

To conclude, I think the Data Scientist profile must have the optimal combination of several intersecting domains (perhaps the diagram above is a clearer representation of these concepts) and should not exclude – under any circumstances – active research and continuous analysis of the current state-of-the-art in Deep Learning and adjacent areas. I really don’t think there are any shortcuts, nor any acceptable uncovered knowledge zones.