There are three common types of machine learning approaches: 1) supervised learning, where a learning system learns a latent mapping from labeled examples, 2) unsupervised learning, where a learning system builds a model of the data distribution from unlabeled examples, and 3) reinforcement learning, where a decision-making system is trained to make optimal decisions. From the designer's point of view, all kinds of learning are supervised by a loss function: the sources of supervision must be defined by humans, and one way to do this is through the loss function.

An ensemble combines multiple learning models to obtain a more powerful one. Combining different models into an ensemble leads to better generalization on the data and reduces the chance of overfitting. A random forest is an example of an ensemble model, where multiple decision trees are combined. As this post covers recent trends in random forests, it assumes the reader has a background in decision trees (if not, please refer to decision-trees-in-machine-learning, a great post by Prashant Gupta).
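As a minimal sketch of the ensemble idea (the dataset and hyperparameters here are illustrative, not from the post), we can compare a single decision tree to a random forest of 100 trees on a synthetic classification task using scikit-learn:

```python
# Illustrative sketch: a random forest as an ensemble of decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy dataset with a fixed seed for reproducibility.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One tree vs. an ensemble of 100 trees trained on bootstrap samples.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print(f"single tree accuracy:   {tree.score(X_te, y_te):.3f}")
print(f"random forest accuracy: {forest.score(X_te, y_te):.3f}")
```

Averaging the votes of many decorrelated trees is what smooths out the high variance of any individual tree.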

The random forest was introduced by Leo Breiman [1] in 2001. The motivation lies in…

I first came across the Kalman filter during my undergraduate studies, in the navigation systems class. It was the last lecture, and the professor said it was outside the course syllabus, but that anyone who deals with real-time applications should expect to meet it again. He was right. I went on to study for a master's degree in Guidance, Control, and Navigation (GCN) at the Aerospace Engineering Faculty of the Technion, where I came across the Kalman filter again, using it to filter noisy measurements from various sensors in real-time navigation problems. Later…

The calculation of curve length is a major component of many modern and classical problems. For example, verifying a handwritten signature involves computing the length along the curve (Ooi et al.). When one tackles length computation in real-life problems, one faces several constraints, such as additive noise, discretization error, and partial information. In this post, we review our work; a preprint is available online:

https://www.researchgate.net/publication/345435009_Length_Learning_for_Planar_Euclidean_Curves

In the current work, we address a fundamental question in the field of geometry, where we aim to reconstruct a basic property using a deep neural network (DNN). The simplest geometric object…
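To make the constraints above concrete (this is a hypothetical illustration, not the paper's method), here is the naive length estimate for a discretely sampled planar curve, and how additive noise biases it upward:

```python
import numpy as np

# Illustrative sketch: polygonal arc-length estimate of a sampled planar curve.
t = np.linspace(0, 2 * np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t)], axis=1)  # unit circle, true length 2*pi

def poly_length(points):
    # Sum of Euclidean distances between consecutive samples.
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

clean = poly_length(curve)

# Additive noise on the samples inflates every segment length,
# so the naive estimator is biased upward.
rng = np.random.default_rng(0)
noisy = poly_length(curve + rng.normal(scale=0.01, size=curve.shape))

print(f"true: {2 * np.pi:.4f}  clean: {clean:.4f}  noisy: {noisy:.4f}")
```

The clean polygonal estimate converges to the true length as the sampling gets denser, while the noisy one does not, which is part of what makes length learning under noise non-trivial.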

According to many data scientists, the most reliable model performance measure is accuracy. However, it is not the definitive model metric; there are many others, too. Occasionally, the accuracy may be high while the false-negative rate (to be defined in the sequel) is also high. Another key measure, common in machine learning these days for evaluating model performance, is the F-measure, which proportionally combines the precision and recall measures. In this post, we explore different approaches to weighting the two when they are imbalanced.

The confusion matrix summarizes the performance of a supervised learning algorithm in ML. It is more…
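A small worked example (the labels below are illustrative, not from the post) shows how the confusion matrix exposes a high false-negative count that accuracy alone hides:

```python
# Illustrative sketch: accuracy vs. the F-measure on an imbalanced label set.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# The four cells of the binary confusion matrix.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)                   # 0.8125 -- looks decent
precision = tp / (tp + fp)                           # 1.0
recall = tp / (tp + fn)                              # 0.25 -- 3 of 4 positives missed
f1 = 2 * precision * recall / (precision + recall)   # 0.4

print(f"accuracy={accuracy:.4f}  precision={precision:.2f}  "
      f"recall={recall:.2f}  F1={f1:.2f}")
```

Accuracy is above 81% here, yet three of the four positives are missed; the F-measure of 0.4 makes that failure visible.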

The Kalman Filter (KF) is widely used for vehicle navigation tasks, and in particular for vehicle trajectory smoothing. One of the problems associated with applying the KF to navigation tasks is modeling the vehicle trajectory. For simplicity, it is convenient to choose a Constant Velocity (CV) model or a Constant Acceleration (CA) model for a wide range of tracking problems, where the position derivative is indeed the velocity and the velocity is (nearly) constant (in the CV model). This choice keeps the system linear and stable, as one demands from this type of tracking problem. Aside from…
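A minimal sketch of a 1-D Kalman filter with a CV model (all noise covariances and the trajectory below are assumed for illustration) shows the predict/update cycle on noisy position measurements:

```python
import numpy as np

# Illustrative sketch: 1-D Kalman filter with a Constant Velocity (CV) model.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-4 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial estimate covariance

# Simulated trajectory: constant velocity 0.5, measured with noise.
rng = np.random.default_rng(1)
true_positions = 0.5 * np.arange(30)
measurements = true_positions + rng.normal(scale=0.5, size=30)

for z in measurements:
    # Predict: propagate the state and covariance through the CV model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated velocity: {x[1, 0]:.3f} (true: 0.5)")
```

Note that velocity is never measured directly; the CV model lets the filter infer it from the sequence of noisy positions.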

The COVID-19 pandemic has affected the entire world. Many people lost their jobs, kids stayed at home, and the economic crisis is disastrous. The question of how the world will look after COVID-19 is of high interest. Many futurists predict a different world, where we should rethink public spaces, and believe that the memory of the COVID-19 lockdown will remain for a long time (Del Bello, 2020). Google collects and arranges worldwide data, where you can see the daily new cases and deaths:

In this post, we focus on deep learning techniques for sequential data. All of us are familiar with this kind of data: text is a sequence of words, and video is a sequence of images. More challenging examples come from the branch of time-series data, with medical information such as heart rate and blood pressure, or finance, with stock price information. The most common AI approach for time-series tasks with deep learning is the Recurrent Neural Network (RNN). The motivation to use RNNs lies in the generalization of the solution with respect to time. As sequences have different…
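To illustrate that generalization with respect to time (this is a toy sketch with random weights and assumed sizes, not a trained model), a vanilla RNN cell applies the same weights at every time step, so one set of parameters handles sequences of any length:

```python
import numpy as np

# Illustrative sketch: a vanilla RNN cell with weight sharing across time.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8  # assumed sizes for illustration

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden
b = np.zeros(hidden_dim)

def rnn_forward(sequence):
    # The same (W_xh, W_hh, b) are reused at every step, which is what
    # lets the cell process sequences of arbitrary length.
    h = np.zeros(hidden_dim)
    for x_t in sequence:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b)
    return h

short = rng.normal(size=(5, input_dim))   # 5 time steps
long = rng.normal(size=(12, input_dim))   # 12 time steps, same cell
print(rnn_forward(short).shape, rnn_forward(long).shape)
```

Both sequences produce a hidden state of the same size, because the recurrence, not the sequence length, defines the parameters.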

On Monday, October 5, Python released its new stable version, 3.9.0. If you are interested in the source page of Python, it is available at this link: whatsnew/3.9. In this post, we review the release highlights, new features, new modules, and optimizations, and provide some source code to try in your own environment. Moreover, we refer to some additional reading and implementation sources.
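As a quick taste of the release (the dictionaries below are illustrative; the snippet requires Python 3.9+), the new dict merge (`|`) and update (`|=`) operators from PEP 584 are among the highlights:

```python
# Illustrative sketch: PEP 584 dict merge and update operators (Python 3.9+).
defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090, "debug": True}

# Merge: the right-hand operand wins on key conflicts.
merged = defaults | overrides
print(merged)  # {'host': 'localhost', 'port': 9090, 'debug': True}

# In-place update, equivalent to defaults.update(overrides).
defaults |= overrides
print(defaults == merged)  # True
```

Before 3.9, the same merge required `{**defaults, **overrides}` or a copy-then-update dance; the operator form reads much more naturally.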

This post deals with the key parameter I found to have a high influence: the discount factor. It discusses time-based penalization to achieve better performance, where the discount factor is modified accordingly.

I assume that if you landed on this post, you are already familiar with RL terminology. If that is not the case, then I highly recommend these blogs, which provide great background, before you continue: Intro1 and Intro2.

The discount factor, 𝛾, is a real value ∈ [0, 1] that weights the rewards the agent achieves in the past, present, and future. In other words, it relates the…
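A small worked computation (the reward sequence is illustrative) shows how 𝛾 weights future rewards in the discounted return G_t = Σ 𝛾^k · r_{t+k}:

```python
# Illustrative sketch: the discounted return under different values of 𝛾.
def discounted_return(rewards, gamma):
    # Accumulate from the last reward backwards: G_t = r_t + gamma * G_{t+1}.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 1.0, 1.0, 1.0]
print(discounted_return(rewards, gamma=1.0))   # 4.0    -- all rewards count equally
print(discounted_return(rewards, gamma=0.5))   # 1.875  -- future rewards shrink
print(discounted_return(rewards, gamma=0.0))   # 1.0    -- only the immediate reward
```

Sliding 𝛾 between 0 and 1 thus moves the agent between myopic and far-sighted behavior, which is exactly the lever the post tunes.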

Founder @ ALMA, PhD Candidate, AI Researcher.