An ensemble combines multiple learning models to obtain a more powerful one. Combining different models into an ensemble leads to better generalization of the data, minimizing the chance of overfitting. A random forest is an example of an ensemble model, in which multiple decision trees are combined. As this post covers the latest trends in random forests, it is assumed the reader has a background in decision trees (if not, please refer to decision-trees-in-machine-learning, a great post by Prashant Gupta).
The random forest was introduced by Leo Breiman [1] in 2001. The motivation lies in the key disadvantage of a single decision tree: it is prone to overfitting when many leaves are created. Hence, many decision trees together lead to a more stable model with better generalization. The random forest idea is to create many decision trees, each of which should predict the target values reasonably well but look different from the other trees. This diversity is achieved by adding random variations during the tree-building process: each tree is trained on a different random sample of the data, and the candidate features at each split are selected randomly. …
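The two sources of randomness described above can be sketched in a few lines of plain Python (a minimal illustration with my own function names, not the actual implementation used by any library):

```python
import random

def bootstrap_sample(data, rng):
    """Draw n samples with replacement -- each tree sees a different view of the data."""
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

def random_feature_subset(n_features, max_features, rng):
    """At each split, only a random subset of the features is considered."""
    return rng.sample(range(n_features), max_features)

rng = random.Random(0)
data = list(range(10))
sample = bootstrap_sample(data, rng)                        # data for one tree
feats = random_feature_subset(n_features=4, max_features=2, rng=rng)  # one split test
```

Repeating these two steps for every tree (and every split) is what makes each tree in the forest "look different" while still fitting the same problem.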
I first came across the Kalman filter during my undergraduate studies, in the navigation systems class. It was the last lecture, and the professor said it was outside the course syllabus, but that anyone dealing with real-time applications should expect to meet it again. He was right, and I went on to study for a master's degree in the field of Guidance, Control, and Navigation (GCN) at the Aerospace Engineering Faculty of the Technion. There I came across the Kalman filter again, using it to filter noisy measurements from various sensors in real-time navigation problems. Later on, I used different Kalman filter extensions while exploring solutions for real-life problems, such as the Extended Kalman Filter, the Iterated EKF, etc. Today, I am pursuing my Ph.D. …
The calculation of curve length is a key component in many modern and classical problems. For example, handwritten signature verification involves computing the length along the curve (Ooi et al.). When handling length computation in real-life problems, one faces several constraints such as additive noise, discretization error, and partial information. In this post, we review our work; a preprint is available online:
https://www.researchgate.net/publication/345435009_Length_Learning_for_Planar_Euclidean_Curves
In this work, we address a fundamental question in the field of geometry: reconstructing a basic property using a DNN. The simplest geometric object is a curve, and a simple metric to evaluate a curve is its length. The classical literature offers many closed-form expressions for calculating the length and other geometric properties (Kimmel, 2003). However, given the power of DNNs as function approximators, we are highly motivated to reconstruct the curve length (arclength) property by designing a DNN. …
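For reference, the classical discrete approximation of arclength simply sums the Euclidean distances between consecutive samples of the curve; this is the baseline quantity a learned model would be asked to reconstruct. A minimal sketch (the unit-circle sampling is my own example):

```python
import math

def arclength(points):
    """Discrete arclength: sum of Euclidean distances between consecutive samples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Sample the unit circle; the discrete length should approach 2*pi as n grows.
n = 1000
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n + 1)]
length = arclength(circle)
```

With 1000 samples the polygonal approximation of the circle is already within about 1e-5 of the true circumference, which illustrates how quickly discretization error shrinks on a smooth, noise-free curve.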
The COVID-19 pandemic has affected the entire world. Many people lost their jobs, kids stayed at home, and the economic crisis is disastrous. The question of “how will the world be after COVID-19” is of high interest. Many futurists predict a different world, where we should rethink public spaces, and believe that the memory of the COVID-19 lockdown will remain for a long time (Del Bello, 2020). Google collects and arranges worldwide data, where you can see the daily new cases and deaths:
In this post, we focus on deep learning techniques for sequential data. All of us are familiar with this kind of data: for example, text is a sequence of words, and video is a sequence of images. More challenging examples come from the branch of time-series data, such as medical information (heart rate, blood pressure, etc.) or finance (stock price information). The most common deep learning approaches for time-series tasks are Recurrent Neural Networks (RNNs). The motivation to use RNNs lies in generalizing the solution with respect to time. As sequences (mostly) have different lengths, a classical deep learning architecture such as the Multilayer Perceptron (MLP) cannot be applied without modification. Moreover, the number of weights in an MLP grows huge. Hence, the RNN is commonly used, where the weights are shared across all time steps. …
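A minimal NumPy sketch of a vanilla RNN forward pass makes the weight sharing concrete: the same three parameter arrays process sequences of any length (the sizes and names below are illustrative, not taken from any specific library):

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, bh):
    """Vanilla RNN: the SAME weights (Wxh, Whh, bh) are reused at every time step,
    so sequences of any length can be processed without changing the model."""
    h = np.zeros(Whh.shape[0])
    for x in xs:                       # iterate over time steps
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    return h                           # final hidden state summarizes the sequence

rng = np.random.default_rng(0)
Wxh = 0.1 * rng.standard_normal((8, 3))   # input-to-hidden
Whh = 0.1 * rng.standard_normal((8, 8))   # hidden-to-hidden (shared across time)
bh = np.zeros(8)

short_seq = rng.standard_normal((5, 3))   # 5 time steps
long_seq = rng.standard_normal((50, 3))   # 50 time steps -- same weights still work
h_short = rnn_forward(short_seq, Wxh, Whh, bh)
h_long = rnn_forward(long_seq, Wxh, Whh, bh)
```

An MLP, by contrast, would need a separate weight matrix sized to each input length, which is exactly the parameter blow-up mentioned above.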
On Monday, October 5, Python released its new stable version, 3.9.0. If you are interested in the source page of Python, it is available at this link: whatsnew/3.9. In this post, we review the release highlights, new features, new modules, and optimizations, and provide some source code to try in your own environment. Moreover, we refer to some additional reading and implementation sources.
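Two of the widely cited 3.9 additions, the dictionary merge operators (PEP 584) and string prefix/suffix removal (PEP 616), can be tried directly (assuming you are running Python 3.9 or later):

```python
# 1) Dictionary merge (|) operator, new in 3.9 (PEP 584).
defaults = {"host": "localhost", "port": 8000}
overrides = {"port": 9000}
config = defaults | overrides          # right-hand side wins on conflicting keys

# 2) str.removeprefix() / str.removesuffix(), new in 3.9 (PEP 616).
name = "test_parser.py".removeprefix("test_").removesuffix(".py")
```

Before 3.9, the merge required `{**defaults, **overrides}` and the string cleanup required error-prone slicing, so both additions read much closer to the intent.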
This post deals with the key parameter I found to be highly influential: the discount factor. It discusses time-based penalization to achieve better performance, where the discount factor is modified accordingly.
I assume that if you landed on this post, you are already familiar with RL terminology. If that is not the case, I highly recommend these blogs, which provide great background, before you continue: Intro1 and Intro2.
The discount factor, 𝛾, is a real value ∈ [0, 1] that weighs the rewards the agent achieves in the past, present, and future. In other words, it relates the rewards to the time domain. …
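The role of 𝛾 is easiest to see on the discounted return itself, G = r₀ + 𝛾r₁ + 𝛾²r₂ + …, where a smaller 𝛾 shrinks the contribution of future rewards. A short sketch (the reward sequence is an arbitrary example):

```python
def discounted_return(rewards, gamma):
    """G = r_0 + gamma*r_1 + gamma^2*r_2 + ...
    Smaller gamma discounts future rewards more aggressively."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

rewards = [1.0, 1.0, 1.0, 1.0]
g_far = discounted_return(rewards, gamma=0.99)   # far-sighted agent, G ~ 3.94
g_near = discounted_return(rewards, gamma=0.5)   # myopic agent, G = 1.875
```

With 𝛾 = 0 the agent values only the immediate reward; with 𝛾 close to 1 it values the whole future almost equally, which is exactly the lever that time-based penalization tunes.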
This post reviews the latest innovations in TCN-based solutions. We first present a case study of motion detection and briefly review the TCN architecture and its advantages over conventional approaches such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Then, we introduce several novel works using TCNs, including improving traffic prediction, sound event localization & detection, and probabilistic forecasting.
A brief review of TCN
The seminal work of Lea et al. (2016) first proposed Temporal Convolutional Networks (TCNs) for video-based action segmentation. The conventional process involves two steps: first, low-level features that encode spatial-temporal information are computed, usually with a CNN; second, these low-level features are fed into a classifier that captures high-level temporal information, usually an RNN. The main disadvantage of such an approach is that it requires two separate models. …
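The building block behind TCNs is the causal (and optionally dilated) convolution, in which the output at time t depends only on present and past inputs. A minimal NumPy sketch of this idea (my own toy implementation, not the authors' code):

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: output at time t uses only x[t], x[t-d], x[t-2d], ...
    The input is left-padded with zeros so the output never sees the future."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(6, dtype=float)                        # input sequence [0..5]
y = causal_conv1d(x, w=np.array([1.0, 1.0]), dilation=2)
# Each output y[t] = x[t] + x[t-2], with zeros before the sequence start.
```

Stacking such layers with growing dilations (1, 2, 4, …) gives a TCN its long receptive field in a single feed-forward model, removing the need for a separate RNN stage.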
Many vision-aided navigation approaches have been presented in the last decade, as there is a wide range of applications these days (Huang, 2019). In other words, the classical field of inertial navigation, with low-cost inertial sensors as the only source of information, has begun to receive attention from novel deep learning methods. The main problem of inertial navigation is drift, which is a crucial source of error. Other problems include wrong initialization, incorrect sensor modeling, and approximation errors.
In this post, we review the integration of deep learning and the inertial measurement unit (IMU) into the classic inertial navigation system (INS), which solves only some of the above problems. First, we present some cutting-edge architectures for improving speed estimation, noise reduction, zero-velocity detection, and attitude & position prediction. Second, the KITTI and OxIOD datasets are discussed. …
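The drift problem mentioned above can be illustrated in a few lines: a small constant accelerometer bias, integrated twice by the navigation equations, yields a position error that grows quadratically with time (the numbers below are illustrative, not from any real sensor):

```python
# Sketch: why a tiny constant accelerometer bias causes quadratic position drift.
dt = 0.01            # 100 Hz IMU sampling
bias = 0.01          # m/s^2, small constant accelerometer bias
v = 0.0              # accumulated velocity error
p = 0.0              # accumulated position error
positions = []
for step in range(10000):    # simulate 100 seconds
    v += bias * dt           # integrate acceleration error -> velocity error
    p += v * dt              # integrate velocity error -> position error
    positions.append(p)
# After 100 s the position error is ~0.5 * bias * t^2 = 50 m, from a 0.01 m/s^2 bias.
```

This unbounded growth, even from an almost negligible bias, is exactly why inertial-only navigation needs aiding, whether from vision, zero-velocity updates, or learned corrections.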
With the recent increase in general unmanned-vehicle usage (both flying and ground-based vehicles), the ability to autonomously navigate previously unknown and unexplored locations has become a task of paramount importance. Robots capable of exploring standard and extreme environments (such as caves, damaged buildings, etc.) are receiving increasing attention. One solution that has been considered is Simultaneous Localization and Mapping (SLAM), where a vehicle simultaneously explores and maps its environment.
Visual SLAM using camera sensor arrays has received widespread attention in both academia and industry, partially due to the rapid improvement of computer vision technology. Cameras, however, are limited by several factors, such as their considerable computational demand and their inability to operate under harsh lighting conditions, which can severely restrict missions. …