Statistics homework 2
Theory:
-
Describe the most common configuration of data repositories in real-world corporate environments: concepts such as operational or transactional systems (OLTP), data warehouses (DW), data marts, analytical and statistical systems (OLAP), etc. Try to draw a conceptual picture of how all these components may work together and how the flow of data and information is processed to extract useful knowledge from raw data.
-
Show how we can obtain an online algorithm for the arithmetic mean and explain the various possible reasons why it is preferable to the “naive” algorithm based on the definition.
Practice:
-
Create, in both C# and VB.NET, a demonstrative program which computes the online arithmetic mean (for a numeric variable) and your own algorithm to compute the distribution of a discrete variable and of a continuous variable (values may be simulated with the Random object).
-
Create an object providing a rectangular area which can be moved and resized using the mouse. This area will hold our future charts and graphics.
Practice theory:
- Understand how the floating point representation works and describe systematically (possibly using categories) all the possible problems that can happen. Try to classify the various issues and limitations (representation, comparison, rounding, propagation, approximation, loss of significance, cancellation, etc.) and provide simple examples for each of the categories you have identified
Theory 1
Every statistical analysis starts from raw data, but where is this data stored?
The first system in which data is stored is called an Online Transaction Processing (OLTP) system, where every write, read or update operation is an atomic transaction; an example of this kind of system is a DBMS. Operational data systems are used to store raw data because they support high-volume, low-latency access.
This raw data flows from online transactional systems to so-called Online Analytical Processing (OLAP) systems, or data warehouses, which are data systems designed for heavy aggregation, data mining, and ad hoc queries. A description of these systems by IBM [1] adds that they also support the artificial intelligence and machine learning workloads used for sophisticated analysis. These systems can in turn feed data marts, which make a subset of the aggregated data more accessible to interested users or groups that perform more detailed analysis.
These two kinds of systems are connected by data streams and by another data repository called the data lake. Data flows from multiple OLTP systems through data streams into a data lake that stores large quantities of raw data from multiple sources. OLAP systems are in turn connected to the data lake to aggregate it and store clean, processed data.
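The flow described above can be sketched as a toy pipeline (all names and records below are invented for illustration; real systems would be databases and ETL jobs, not Python lists and dictionaries):

```python
from collections import defaultdict

# 1. OLTP systems: each record is the result of a small atomic transaction.
oltp_orders = [
    {"region": "EU", "amount": 120.0},
    {"region": "US", "amount": 80.0},
    {"region": "EU", "amount": 40.0},
]
oltp_refunds = [{"region": "US", "amount": -30.0}]

# 2. Data lake: raw records from multiple sources, stored as-is.
data_lake = oltp_orders + oltp_refunds

# 3. OLAP / data warehouse: heavy aggregation over the raw data.
revenue_by_region = defaultdict(float)
for record in data_lake:
    revenue_by_region[record["region"]] += record["amount"]

# 4. Data mart: a subset of the aggregates for one interested group.
eu_sales_mart = {"EU": revenue_by_region["EU"]}
print(eu_sales_mart)  # {'EU': 160.0}
```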
Theory 2
Online arithmetic mean
We know that the normal mean for n values is:
\[\bar{x}_{n} = \frac{1}{n} \cdot \sum_{i = 1}^{n} x_{i}\]If we receive another value and want to calculate the average with the equation above, we have to recompute the whole sum, and for this purpose we have to store all the values. This is inefficient, because with a large amount of data it is possible to fill the memory and no longer be able to calculate the average.
What we need, in the case of a data stream of unknown length, is an online algorithm, and we can rework the equation above as follows:
\[\bar{x}_{n-1} = \frac{1}{n-1} \cdot \sum_{i = 1}^{n-1} x_{i} \implies (n-1) \cdot \bar{x}_{n-1} = \sum_{i = 1}^{n-1} x_{i}\]If we get a new value \(x_{n}\) for which to calculate the mean, we should sum all \(x_{i} \ \forall i \in [1, n]\), which is equal to:
\[x_{1} + x_{2} + \dots + x_{n} = (n-1) \cdot \bar{x}_{n-1} + x_{n}\]and then divide by the new number of elements:
\[\bar{x}_{n} = \frac{x_{1} + x_{2} + \dots + x_{n}}{n} = \frac{(n-1) \cdot \bar{x}_{n-1} + x_{n}}{n}\]Let's make some arithmetic simplifications of the new formula:
\[\bar{x}_{n} = \frac{(n-1) \cdot \bar{x}_{n-1} + x_{n}}{n} =\] \[= \frac{(n-1) \cdot \bar{x}_{n-1}}{n} + \frac{x_{n}}{n} =\] \[= \bar{x}_{n-1} - \frac{\bar{x}_{n-1}}{n} + \frac{x_{n}}{n} =\] \[= \bar{x}_{n-1} + \frac{1}{n} (x_{n} - \bar{x}_{n-1})\]So the online algorithm for the average is:
\[\bar{x}_{n} = \bar{x}_{n-1} + \frac{1}{n} (x_{n} - \bar{x}_{n-1})\]Why is the online mean better?
The online arithmetic mean is preferable for two reasons.
The first concerns memory usage: with the common mean we need to store all the \(x_{i}\) and recompute the mean every time we get a new value. With the online algorithm, on the other hand, we only need to keep the current average \(\bar{x}_{n}\) and the counter \(n\).
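A minimal sketch of the update rule (illustrative, not taken from the assignment code) makes the constant-memory property evident:

```python
def online_mean(stream):
    """Consume an iterable one value at a time, keeping only (mean, n)."""
    mean = 0.0
    n = 0
    for x in stream:
        n += 1
        mean += (x - mean) / n  # x̄_n = x̄_{n-1} + (x_n - x̄_{n-1}) / n
    return mean

print(online_mean([2.0, 4.0, 6.0]))  # 4.0
```

The function never materializes the stream, so it works just as well on a generator of unknown length.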
The second reason concerns machine precision and the error of machine computations. Each machine has a number \(\mu\), called the machine precision, which is the smallest number such that the computed difference between \(1\) and \(1 + \mu\) is nonzero. Computers therefore cannot represent every \(x \in \mathbb{R}\): between two consecutive machine numbers (numbers representable by the machine) there is a small interval of non-machine numbers. This implies that the result of an operation between two machine numbers may not itself be a machine number.
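The machine precision of IEEE double arithmetic can be found empirically by halving a candidate until \(1 + \mu\) is no longer distinguishable from \(1\) (a standard sketch, independent of the assignment code):

```python
eps = 1.0
# Halve eps while 1 + eps/2 is still distinguishable from 1 in doubles.
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps)            # 2.220446049250313e-16
print(eps == 2**-52)  # True
```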
In the following sections we show tools and methods to analyze the algorithmic error in the computation of a function, and we will analyze both algorithms keeping only terms that are linear in the errors, excluding higher-order terms such as quadratic or cubic ones (first-order error analysis). We assume that all input values are already machine numbers, so the local representation error \(\epsilon_{x_{i}} \ \forall i \in [1, n]\) is 0.
Error analysis tools
The local error of representing a number \(x\) as the machine number \(\tilde{x}\) is:
\[\frac{\tilde{x} - x}{x} = \epsilon_{x}\]which implies that \(\tilde{x} = x(1 + \epsilon_{x})\).
And the algorithmic error in the computation of a function \(f(x)\) is:
\[\epsilon_{alg} = \frac{\psi(\tilde{x}) - f(\tilde{x})}{f(\tilde{x})}\]where \(\psi(\tilde{x})\) is the value actually computed by the machine and \(\tilde{x}\) is the machine representation of the input value.
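As a one-line worked example of this definition (not part of the original derivation): for a single machine addition \(f(x_{1}, x_{2}) = x_{1} + x_{2}\), the computed result is
\[\psi(\tilde{x}_{1}, \tilde{x}_{2}) = (\tilde{x}_{1} + \tilde{x}_{2})(1 + \epsilon), \quad |\epsilon| \leq \mu\]so
\[\epsilon_{alg} = \frac{(\tilde{x}_{1} + \tilde{x}_{2})(1 + \epsilon) - (\tilde{x}_{1} + \tilde{x}_{2})}{\tilde{x}_{1} + \tilde{x}_{2}} = \epsilon, \quad |\epsilon_{alg}| \leq \mu\]that is, each single machine operation contributes at most one unit of machine precision.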
Common average algorithm analysis
Algorithm:
\[\bar{x}_{n} = \frac{x_{1} + x_{2} + \dots + x_{n-1} + x_{n}}{n}\]In this case we state the stability bound directly, since the full error analysis would require a computational-graph analysis:
\[|\epsilon_{alg}| \leq n\mu\]For the classic average the algorithmic error \(\epsilon_{alg}\) grows as \(n\) grows, so for high values of \(n\) the worst-case algorithmic error is very large:
\[\lim_{n \to \infty } n\mu = \infty\]Online average algorithm analysis
Algorithm:
\[\bar{x}_{n} = \bar{x}_{n-1} + \frac{1}{n} (x_{n} - \bar{x}_{n-1})\]Error analysis:
\[\psi(x) = (\bar{x}_{n-1}(1 + \eta) + \frac{1}{n}((x - \bar{x}_{n-1}(1 + \eta))(1 + \epsilon_{1}))(1 + \epsilon_{2}))(1 + \epsilon_{3}) =\] \[= \bar{x}_{n-1} + \bar{x}_{n-1}\eta + \frac{1}{n}(x - \bar{x}_{n-1} + x\eta - \bar{x}_{n-1}\eta + x\epsilon_{1} - \bar{x}_{n-1}\epsilon_{1} + x\epsilon_{2} - \bar{x}_{n-1}\epsilon_{2}) + \bar{x}_{n-1}\epsilon_{3} + \epsilon_{3}\frac{1}{n}(x - \bar{x}_{n-1})=\] \[= \frac{n\bar{x}_{n-1} + n\bar{x}_{n-1}\eta + x - \bar{x}_{n-1} + x\eta - \bar{x}_{n-1}\eta + x\epsilon_{1} - \bar{x}_{n-1}\epsilon_{1} + x\epsilon_{2} - \bar{x}_{n-1}\epsilon_{2} + n\bar{x}_{n-1}\epsilon_{3} + x\epsilon_{3} - \bar{x}_{n-1}\epsilon_{3}}{n}\]where \(\eta\) is the error accumulated on \(\bar{x}_{n-1}\) and \(\epsilon_{1}, \epsilon_{2}, \epsilon_{3}\) are the local errors of the subtraction, the multiplication by \(\frac{1}{n}\) and the final addition. Let's apply the algorithmic error formula:
\[\epsilon_{alg} = \frac{\frac{1}{n}((x - \bar{x}_{n-1})(1 + \epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \eta) + n\bar{x}_{n-1}(1 + \eta + \epsilon_{3})) - \bar{x}_{n-1} - \frac{1}{n}(x - \bar{x}_{n-1})}{\bar{x}_{n-1} + \frac{1}{n}(x - \bar{x}_{n-1})} =\] \[= \frac{\frac{1}{n}((x - \bar{x}_{n-1})(1 + \epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \eta) + n\bar{x}_{n-1}(1 + \eta + \epsilon_{3}) - n\bar{x}_{n-1} - x + \bar{x}_{n-1})}{\bar{x}_{n-1} + \frac{1}{n}(x - \bar{x}_{n-1})} =\] \[= \frac{\frac{1}{n}((x - \bar{x}_{n-1})(\epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \eta) + n\bar{x}_{n-1}(\eta + \epsilon_{3}))}{\bar{x}_{n-1} + \frac{1}{n}(x - \bar{x}_{n-1})} =\] \[= \frac{\frac{1}{n}((x - \bar{x}_{n-1})(\epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \eta) + n\bar{x}_{n-1}(\eta + \epsilon_{3}))}{\frac{1}{n}(n\bar{x}_{n-1} + x - \bar{x}_{n-1})} =\] \[= \frac{(x - \bar{x}_{n-1})(\epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \eta) + n\bar{x}_{n-1}(\eta + \epsilon_{3})}{x + (n-1)\bar{x}_{n-1}} =\] \[= \frac{\epsilon_{3}(x + (n-1)\bar{x}_{n-1}) + \eta(x + (n-1)\bar{x}_{n-1}) + \epsilon_{2}(x - \bar{x}_{n-1}) + \epsilon_{1}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\]Each local error is bounded in absolute value by the machine precision \(\mu\), so:
\[|\epsilon_{alg}| = \left|\frac{\epsilon_{3}(x + (n-1)\bar{x}_{n-1}) + \eta(x + (n-1)\bar{x}_{n-1}) + \epsilon_{2}(x - \bar{x}_{n-1}) + \epsilon_{1}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\right| =\] \[= \left|\frac{\epsilon_{3}(x + (n-1)\bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}} + \frac{\eta(x + (n-1)\bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}} + \frac{\epsilon_{2}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}} + \frac{\epsilon_{1}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\right| =\] \[= \left|\epsilon_{3} + \eta + \frac{\epsilon_{2}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}} + \frac{\epsilon_{1}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\right| \leq\] \[\leq |\epsilon_{3}| + |\eta| + \left|\frac{\epsilon_{2}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\right| + \left|\frac{\epsilon_{1}(x - \bar{x}_{n-1})}{x + (n-1)\bar{x}_{n-1}}\right| \leq\] \[\leq \mu\left( 2 + 2\frac{|x - \bar{x}_{n-1}|}{|x + (n-1)\bar{x}_{n-1}|}\right)\]Concluding we have that:
\[|\epsilon_{alg}| \leq \mu\left( 2 + 2\frac{|x - \bar{x}_{n-1}|}{|x + (n-1)\bar{x}_{n-1}|}\right) \leq 4\mu\]In this case the worst-case error is bounded by a constant that does not depend on \(n\) (the last inequality assumes \(|x - \bar{x}_{n-1}| \leq |x + (n-1)\bar{x}_{n-1}|\), which holds, for example, when all the data are nonnegative).
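The difference can also be observed numerically: summing the value 0.1 ten million times accumulates rounding error in the naive mean, while the online update happens to stay exact in this particular case (an illustrative sketch, not part of the assignment code):

```python
values_count = 10_000_000
x = 0.1  # not exactly representable in binary floating point

# Naive mean: accumulate the full sum, then divide once.
total = 0.0
for _ in range(values_count):
    total += x
naive_mean = total / values_count

# Online mean: x̄_n = x̄_{n-1} + (x_n - x̄_{n-1}) / n
online_mean = 0.0
for n in range(1, values_count + 1):
    online_mean += (x - online_mean) / n

print(naive_mean)   # slightly different from 0.1 (accumulated rounding error)
print(online_mean)  # exactly 0.1 (the increment x - online_mean is exactly 0 here)
```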
Conclusions
The online version of the average algorithm is more stable for high numbers of samples and more efficient in terms of memory occupation; therefore, the online version of the algorithm is preferable.
References
[1] https://www.ibm.com/cloud/learn/data-mart
Practice 1
Online arithmetic mean demonstrative program
The online arithmetic mean demonstrative program is really short:
namespace Online_Arithmetic_Mean
{
    public partial class Form1 : Form
    {
        private double mean;
        private int data_number;
        private bool running_data_stream;
        private readonly Random random;

        public Form1()
        {
            InitializeComponent();
            mean = 0.0;
            data_number = 0;
            running_data_stream = false;
            random = new Random();
        }

        private void Button1_Click(object sender, EventArgs e)
        {
            if (running_data_stream)
            {
                timer1.Stop();
                running_data_stream = false;
            }
            else
            {
                timer1.Start();
                running_data_stream = true;
            }
        }

        private void Timer1_Tick(object sender, EventArgs e)
        {
            double new_val = random.NextDouble() * 50;
            data_number += 1;
            // adding the item to the data list
            listBox1.Items.Add(new_val);
            listBox1.SelectedIndex = listBox1.Items.Count - 1;
            listBox1.SelectedIndex = -1;
            // computing the online average: mean += (x_n - mean) / n
            mean += 1 / (double)data_number * (new_val - mean);
            richTextBox1.Text = "Number of items:".PadRight(25) + data_number.ToString() + '\n' +
                                "Mean:".PadRight(30) + mean.ToString();
        }
    }
}
This is the demonstrative program in action:
This is the code for the distribution library:
using System;
using System.Collections.Generic;

namespace Distributions
{
    public interface Distribution<T>
    {
        public int totalRecordings();
        public void updateFrequency(T value);
        public Dictionary<T, double> getDistribution();
    }

    public class DiscreteDistribution<T> : Distribution<T>
    {
        protected Dictionary<T, int> frequencies;

        public DiscreteDistribution()
        {
            frequencies = new Dictionary<T, int>();
        }

        /// <summary>
        /// Returns the total number of submitted values
        /// </summary>
        /// <returns>the total number of submitted values</returns>
        public int totalRecordings()
        {
            int sum = 0;
            foreach (KeyValuePair<T, int> frequency in frequencies)
            {
                sum += frequency.Value;
            }
            return sum;
        }

        /// <summary>
        /// Returns the probability distribution
        /// </summary>
        /// <returns>the probability distribution</returns>
        public Dictionary<T, double> getDistribution()
        {
            Dictionary<T, double> distribution = new Dictionary<T, double>();
            int total = totalRecordings();
            foreach (KeyValuePair<T, int> entry in frequencies)
            {
                double frequency = entry.Value / (double)total;
                distribution.Add(entry.Key, frequency);
            }
            return distribution;
        }

        /// <summary>
        /// Updates the frequencies with a new value
        /// </summary>
        /// <param name="value">the value for which to update the frequencies</param>
        public virtual void updateFrequency(T value)
        {
            if (!frequencies.TryAdd(value, 1))
            {
                // the value key already exists
                frequencies[value] += 1;
            }
        }
    }
    public class ContinuousDistribution : DiscreteDistribution<double>
    {
        private double intervalWidth;
        private double centralItem;
        private int precision;

        /// <summary>
        /// Creates a ContinuousDistribution instance
        /// </summary>
        /// <param name="intervalWidth">the width between two consecutive interval bounds</param>
        /// <param name="centralItem">the starting bound</param>
        public ContinuousDistribution(double intervalWidth, double centralItem)
        {
            this.intervalWidth = intervalWidth;
            this.centralItem = centralItem;
            // computes the number of decimal digits of intervalWidth (e.g. 0.25 -> 2);
            // the loop stops as soon as the scaled width has no fractional part
            precision = 0;
            double auxIntervalWidth = intervalWidth;
            while (auxIntervalWidth != Math.Truncate(auxIntervalWidth))
            {
                auxIntervalWidth *= 10;
                precision++;
            }
            // adds the central item of the distribution
            frequencies.Add(centralItem, 0);
        }

        /// <summary>
        /// Given a value, searches for and creates intervals starting from centralItem
        /// and returns the left bound of the interval to which the given value belongs
        /// </summary>
        /// <param name="value">the value whose membership range is searched</param>
        /// <returns>the left bound of the interval</returns>
        private double searchInterval(double value)
        {
            double key;
            double distance = Math.Abs(centralItem - value);
            if (distance < intervalWidth)
            {
                // inside the interval immediately left or right of centralItem
                key = (value < centralItem) ? Math.Round(centralItem - intervalWidth, precision) : Math.Round(centralItem + intervalWidth, precision);
                if (!frequencies.ContainsKey(key))
                {
                    frequencies.Add(key, 0);
                }
                return key;
            }
            // outside the intervals adjacent to centralItem
            if (value > centralItem)
            {
                // looking for intervals greater than (centralItem + intervalWidth)
                key = centralItem;
                while (value >= key)
                {
                    // creates new intervals until it finds the correct one
                    if (!frequencies.ContainsKey(key))
                    {
                        frequencies.Add(key, 0);
                    }
                    key = Math.Round(key + intervalWidth, precision);
                }
                key = Math.Round(key - intervalWidth, precision);
            }
            else
            {
                // looking for intervals smaller than (centralItem - intervalWidth)
                key = centralItem;
                while (value <= key)
                {
                    // creates new intervals until it finds the correct one
                    if (!frequencies.ContainsKey(key))
                    {
                        frequencies.Add(key, 0);
                    }
                    key = Math.Round(key - intervalWidth, precision);
                }
                key = Math.Round(key + intervalWidth, precision);
            }
            return key;
        }

        /// <summary>
        /// Updates the frequencies with a new value
        /// </summary>
        /// <param name="value">the value for which to update the frequencies</param>
        public override void updateFrequency(double value)
        {
            double key = searchInterval(value);
            frequencies[key] += 1;
        }
    }
}
This is the unit test:
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Distributions;
using System;
using System.Collections.Generic;

namespace DistributionTest
{
    [TestClass]
    public class DiscreteDistributionTest
    {
        [TestMethod]
        public void Test()
        {
            DiscreteDistribution<int> distribution = new DiscreteDistribution<int>();
            Random random = new Random();
            int records_number = 50;
            for (int i = 0; i < records_number; i++)
            {
                distribution.updateFrequency(random.Next(1, 10));
            }
            double sum = 0;
            foreach (KeyValuePair<int, double> entry in distribution.getDistribution())
            {
                sum += entry.Value;
            }
            Assert.AreEqual(1, Math.Round(sum));
            Assert.AreEqual(records_number, distribution.totalRecordings());
        }
    }

    [TestClass]
    public class ContinuousDistributionTest
    {
        [TestMethod]
        public void Test()
        {
            Random random = new Random();
            int records_number = 50;
            int min = 10;
            int max = 200;
            ContinuousDistribution distribution = new ContinuousDistribution(0.1, (max + min) / 2);
            for (int i = 0; i < records_number; i++)
            {
                distribution.updateFrequency(random.NextDouble() * (max - min) + min);
            }
            double sum = 0;
            foreach (KeyValuePair<double, double> entry in distribution.getDistribution())
            {
                sum += entry.Value;
            }
            Assert.AreEqual(1, Math.Round(sum));
            Assert.AreEqual(records_number, distribution.totalRecordings());
        }
    }
}
Practice 2
Code
This is the WinForms code in C#:
using System;
using System.Drawing;
using System.Windows.Forms;

namespace MovableRectangle
{
    public partial class Form1 : Form
    {
        private Bitmap bitmap; // our Bitmap declaration
        private Rectangle rect;
        private int width;
        private int height;
        private bool dragging = false;
        private bool resizing = false;

        public Form1()
        {
            InitializeComponent();
        }

        // Draws the outer border and the 10-pixel-inset inner border
        // on a bitmap sized like the PictureBox client area.
        private void DrawBorders()
        {
            bitmap = new Bitmap(pictureBox1.ClientRectangle.Width,
                                pictureBox1.ClientRectangle.Height);
            rect = new Rectangle(0, 0, pictureBox1.ClientRectangle.Width, pictureBox1.ClientRectangle.Height);
            Rectangle rect2 = new Rectangle(
                rect.X + 10,
                rect.Y + 10,
                rect.Width - 20,
                rect.Height - 20
            );
            using (Graphics graphicsObj = Graphics.FromImage(bitmap))
            using (Pen myPen = new Pen(Color.Black, 3))
            {
                graphicsObj.DrawRectangle(myPen, rect);
                graphicsObj.DrawRectangle(myPen, rect2);
            }
            pictureBox1.Image = bitmap;
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            DrawBorders();
        }

        private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Left)
            {
                Rectangle innerArea = new Rectangle(
                    rect.X + 10,
                    rect.Y + 10,
                    rect.Width - 20,
                    rect.Height - 20
                );
                if (innerArea.Contains(e.Location))
                {
                    // a click inside the inner area starts a drag:
                    // remember the (negative) offset of the cursor from the origin
                    dragging = true;
                    width = rect.X - e.X;
                    height = rect.Y - e.Y;
                }
                else
                {
                    // a click on the border starts a resize:
                    // remember the last cursor position
                    width = e.X;
                    height = e.Y;
                    resizing = true;
                }
            }
        }

        private void pictureBox1_MouseMove(object sender, MouseEventArgs e)
        {
            if (dragging)
            {
                pictureBox1.Left += e.X + width;
                pictureBox1.Top += e.Y + height;
                pictureBox1.Refresh();
            }
            if (resizing)
            {
                pictureBox1.Size = new Size(rect.Width + (e.X - width), rect.Height + (e.Y - height));
                width = e.X;
                height = e.Y;
                pictureBox1.Invalidate();
                DrawBorders();
                pictureBox1.Refresh();
            }
        }

        private void pictureBox1_MouseUp(object sender, MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Left)
            {
                dragging = false;
                resizing = false;
            }
        }
    }
}
How it works
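The drag logic can be sketched in a language-agnostic way: on mouse-down inside the inner area we record the offset between the control's origin and the cursor, and on every mouse-move we re-anchor the origin at the cursor position plus that offset (a minimal Python model with hypothetical names, not the WinForms code itself):

```python
class DraggableArea:
    """Minimal model of the drag behaviour (no resizing, no drawing)."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.offset = None  # set while a drag is in progress

    def mouse_down(self, mx, my):
        # remember where inside the area the user grabbed it
        self.offset = (self.x - mx, self.y - my)

    def mouse_move(self, mx, my):
        if self.offset is not None:
            # re-anchor the origin: cursor position plus the stored offset
            self.x = mx + self.offset[0]
            self.y = my + self.offset[1]

    def mouse_up(self):
        self.offset = None

area = DraggableArea(0, 0)
area.mouse_down(5, 5)    # grab the area at point (5, 5)
area.mouse_move(15, 20)  # drag: the origin follows with the same offset
print((area.x, area.y))  # (10, 15)
```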
Download projects
Practice Theory 1
Virtually every computer in the world represents real numbers with floating-point arithmetic, which is also the arithmetic used to carry out computations. A floating-point number is stored in a register of bits where one bit is used for the sign and the remaining bits are divided into two parts: the exponent and the mantissa.
Formal theory
Let \(x \in \mathbb{R}, \ x \neq 0\), and let \(\beta\) be a representation base; then there exist unique:
- \(p \in \mathbb{Z}\), called the exponent
- a sequence of digits \(\{d_{i}\}_{i \geq 1}\) with \(0 \leq d_{i} \leq \beta-1\), \(d_{1} \neq 0\), and the digits not eventually all equal to \(\beta-1\) (i.e. \(\nexists k : d_{i} = \beta-1 \ \forall i \geq k\))

such that \(x = sign(x) \, \beta^{p} \sum_{i = 1}^{\infty} d_{i}\beta^{-i}\).
From this theorem we can define the set of machine numbers, in which the mantissa is truncated to \(t\) digits and the exponent is bounded:
\[F(\beta, t, m, M) = \{ 0 \} \ \cup \ \{x \in \mathbb{R} : x = sign(x) \beta^{p} \sum_{i = 1}^{t} d_{i}\beta^{-i} \\ \land \ d_{1} \neq 0 \ \land \ 0 \leq d_{i} \leq \beta-1 \ \land \ -m \leq p \leq M \}\]IEEE Standard
In the IEEE standard, doubles are represented as \(F(2, 53, 1021, 1024)\), and there are two special configurations of the exponent field:
- \(p = 1111 \dots 11\) (all ones):
  - if the mantissa is \(0000 \dots 00\) then the value is \(\pm \infty\)
  - if the mantissa is not \(0000 \dots 00\) then the value is NaN (Not a Number)
- \(p = 0000 \dots 00\): these are the denormalized numbers (all numbers with \(d_{1} = 0\))
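To get a feel for the definition of \(F(\beta, t, m, M)\), we can enumerate a toy system such as \(F(2, 3, 1, 2)\) (an illustrative sketch; the parameters are chosen arbitrarily and are small enough to list every element):

```python
from itertools import product

def machine_numbers(beta, t, m, M):
    """Enumerate F(beta, t, m, M): zero plus sign * beta**p * sum(d_i * beta**-i)."""
    numbers = {0.0}
    for p in range(-m, M + 1):
        # the first digit must be nonzero (normalized), the others are free
        for digits in product(range(beta), repeat=t):
            if digits[0] == 0:
                continue
            mantissa = sum(d * beta ** -(i + 1) for i, d in enumerate(digits))
            value = mantissa * beta ** p
            numbers.add(value)
            numbers.add(-value)
    return sorted(numbers)

F = machine_numbers(2, 3, 1, 2)
print(len(F))  # 33 = 1 zero + 2 signs * 4 mantissas * 4 exponents
print(max(F))  # 3.5 = 0.111_2 * 2^2
```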
Limitations
Approximation
This standard is used by virtually all computers, but it obviously has some limitations in terms of how accurate computations are: there are cases in which values need to be approximated to the nearest machine number. The approximation can be done in two ways: truncating or rounding.
Truncation
Suppose that the result \(x\) of a computation isn’t a machine number and lies between two consecutive machine numbers \(a\) and \(b\), where \(a < b\); with truncation, the result stored as a machine number will be \(\tilde{x} = a\).
Since the distance between two consecutive machine numbers is \(\beta^{p - t}\), the absolute error of this approximation is at most this value.
Rounding
Suppose, as above, that the result \(x\) of a computation isn’t a machine number and lies between two consecutive machine numbers \(a\) and \(b\), where \(a < b\). In rounding we split the interval between \(a\) and \(b\) at its midpoint: if \(x - a > \frac{1}{2}\beta^{p-t}\) then \(\tilde{x} = b\), else \(\tilde{x} = a\).
In this case the absolute error is at most half of the distance between two consecutive machine numbers: \(\frac{1}{2}\beta^{p-t}\).
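Representation and rounding errors are easy to observe from any language; for example, 0.1, 0.2 and 0.3 are not machine numbers in base 2, so their rounded representations do not add up exactly (an illustrative sketch):

```python
# 0.1, 0.2 and 0.3 are each rounded to the nearest binary machine number,
# so the rounding errors show up in a simple sum.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2 - 0.3)   # 5.551115123125783e-17 (i.e. 2**-54)
```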
Cancellation
As described in [2], the evaluation of any expression containing a subtraction can result in a relative error so large that all the digits are meaningless. There are two kinds of cancellation: catastrophic and benign.
Catastrophic
Occurs when the operands are themselves subject to rounding errors; for example, in the expression \(b^{2} - 4ac\) the two quantities carry rounding errors, since they are the results of other floating-point operations, and subtracting them can cancel all of their correct digits.
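A classic sketch of catastrophic cancellation (illustrative, with arbitrarily chosen coefficients): for \(x^{2} - 10^{8}x + 1 = 0\), the naive quadratic formula computes the small root as the difference of two nearly equal quantities, while the algebraically equivalent form \(x_{2} = c / (a x_{1})\) avoids the subtraction:

```python
import math

a, b, c = 1.0, -1e8, 1.0
sqrt_disc = math.sqrt(b * b - 4 * a * c)

big_root = (-b + sqrt_disc) / (2 * a)     # ~1e8, computed accurately
naive_small = (-b - sqrt_disc) / (2 * a)  # catastrophic cancellation here
stable_small = c / (a * big_root)         # equivalent form, no subtraction

print(naive_small)   # suffers from cancellation: few (if any) correct digits
print(stable_small)  # close to the true small root, about 1e-8
```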
Benign
Occurs when we subtract exactly known quantities, i.e. values not affected by any representation or rounding error; in that case the subtraction itself is computed with only a small relative error.
References
[2] https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html