FunctionBlackScholesLibrary   "FunctionBlackScholes" 
Some methods for the Black-Scholes options model, demonstrating several approaches to the valuation of a European call.
// reference:
//      people.math.sc.edu
 asset_path(s0, mu, sigma, t1, n)  Simulates the behavior of an asset price over time.
  Parameters:
     s0 : float, asset price at time 0.
     mu : float, growth rate.
     sigma : float, volatility.
     t1 : float, time to expiry date.
     n : int, time steps to expiry date.
  Returns: option values at each equally spaced time step (0 -> t1)
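The simulation follows geometric Brownian motion. A minimal Python sketch of the same idea (an illustration of the math, not the Pine implementation; the fixed-seed `rng` is an assumption made here for reproducibility):

```python
import math
import random

def asset_path(s0, mu, sigma, t1, n, rng=random.Random(42)):
    """Simulate one geometric Brownian motion path from time 0 to t1 in n steps."""
    dt = t1 / n
    path = [s0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)  # standard normal shock
        # exact GBM update: S' = S * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path
```

The path has n+1 values, one per equally spaced step from 0 to t1, and stays strictly positive.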
 binomial(s0, e, r, sigma, t1, m)  Uses the binomial method for a European call.
  Parameters:
     s0 : float, asset price at time 0.
     e : float, exercise price.
     r : float, interest rate.
     sigma : float, volatility.
     t1 : float, time to expiry date.
     m : int, time steps to expiry date.
  Returns: option value at time 0.
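A standard way to implement the binomial method is the Cox-Ross-Rubinstein lattice, sketched below in Python for illustration (the Pine library's internal parameterisation may differ):

```python
import math

def binomial_call(s0, e, r, sigma, t1, m):
    """Cox-Ross-Rubinstein binomial valuation of a European call."""
    dt = t1 / m
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # terminal payoffs at expiry
    values = [max(s0 * u**j * d**(m - j) - e, 0.0) for j in range(m + 1)]
    disc = math.exp(-r * dt)
    # roll back through the tree to time 0
    for _ in range(m):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

With enough steps the tree converges to the closed-form Black-Scholes value (about 10.45 for s0=100, e=100, r=0.05, sigma=0.2, t1=1).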
 bsf(s0, t0, e, r, sigma, t1)  Evaluates the Black-Scholes formula for a European call.
  Parameters:
     s0 : float, asset price at time 0.
     t0 : float, time at which the price is known.
     e : float, exercise price.
     r : float, interest rate.
     sigma : float, volatility.
     t1 : float, time to expiry date.
  Returns: option value at time 0.
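The formula itself is short enough to transcribe directly. A Python sketch for checking values (the name `bsf` mirrors the library's, but this is an illustration, not the Pine code):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsf(s0, t0, e, r, sigma, t1):
    """Black-Scholes value of a European call, price known at t0, expiry at t1."""
    tau = t1 - t0  # time to expiry
    d1 = (math.log(s0 / e) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s0 * norm_cdf(d1) - e * math.exp(-r * tau) * norm_cdf(d2)
```

For example, an at-the-money call with s0=e=100, r=0.05, sigma=0.2 and one year to expiry is worth about 10.4506.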
 forward(e, r, sigma, t1, nx, nt, smax)  Forward difference method to value a European call option.
  Parameters:
     e : float, exercise price.
     r : float, interest rate.
     sigma : float, volatility.
     t1 : float, time to expiry date.
     nx : int, number of space steps in interval (0, L).
     nt : int, number of time steps.
     smax : float, maximum value of S to consider.
  Returns: option values for the European call, float array of size ((nx-1) * (nt+1)).
 mc(s0, e, r, sigma, t1, m)  Uses Monte Carlo valuation on a European call.
  Parameters:
     s0 : float, asset price at time 0.
     e : float, exercise price.
     r : float, interest rate.
     sigma : float, volatility.
     t1 : float, time to expiry date.
     m : int, time steps to expiry date.
  Returns: confidence interval for the estimated range of valuation.
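The Monte Carlo approach samples terminal prices under the risk-neutral measure and reports a confidence interval around the mean discounted payoff. A hedged Python sketch (the seeded `rng` and 95% interval are assumptions for this illustration):

```python
import math
import random
import statistics

def mc_call(s0, e, r, sigma, t1, m, rng=random.Random(1)):
    """Monte Carlo valuation of a European call; returns a 95% confidence interval."""
    payoffs = []
    for _ in range(m):
        z = rng.gauss(0.0, 1.0)
        # risk-neutral terminal price
        st = s0 * math.exp((r - 0.5 * sigma**2) * t1 + sigma * math.sqrt(t1) * z)
        payoffs.append(math.exp(-r * t1) * max(st - e, 0.0))
    mean = statistics.fmean(payoffs)
    half = 1.96 * statistics.stdev(payoffs) / math.sqrt(m)
    return mean - half, mean + half
```

With enough samples the interval brackets the closed-form value (about 10.45 for the standard test parameters).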
Statistics
FunctionMinkowskiDistanceLibrary   "FunctionMinkowskiDistance" 
Method for Minkowski Distance.
The Minkowski distance or Minkowski metric is a metric in a normed vector space
which can be considered as a generalization of both the Euclidean distance and 
the Manhattan distance. 
It is named after the German mathematician Hermann Minkowski.
reference: en.wikipedia.org
 double(point_ax, point_ay, point_bx, point_by, p_value)  Minkowski Distance for single points.
  Parameters:
     point_ax : float, x value of point a.
     point_ay : float, y value of point a.
     point_bx : float, x value of point b.
     point_by : float, y value of point b.
     p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev. 
  Returns: float
 ndim(point_x, point_y, p_value)  Minkowski Distance for N dimensions.
  Parameters:
     point_x : float array, point x dimension attributes.
     point_y : float array, point y dimension attributes.
     p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev. 
  Returns: float
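The metric is the p-th root of the sum of absolute coordinate differences raised to the p-th power. A Python sketch covering the N-dimensional case (illustrative, not the Pine code):

```python
def minkowski(point_x, point_y, p_value=1.0):
    """Minkowski distance between two points given as coordinate lists."""
    s = sum(abs(a - b) ** p_value for a, b in zip(point_x, point_y))
    return s ** (1.0 / p_value)
```

With p=1 this reduces to the Manhattan distance, with p=2 to the Euclidean distance: for the points (0,0) and (3,4) these are 7 and 5 respectively.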
regressLibrary   "regress" 
produces the slope (beta), y-intercept (alpha) and coefficient of determination for a linear regression
 regress(x, y, len)  regress: computes alpha, beta, and r^2 for a linear regression of y on x
  Parameters:
     x : the explaining (independent) variable
     y : the dependent variable
     len : use the most recent "len" values of x and y
  Returns: alpha is the y-intercept, beta is the slope, and r2 is the coefficient of determination
Note: the chart does not show anything, use the return values to compute model values in your own application, if you wish.
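The three return values follow from the standard least-squares identities. An illustrative Python sketch over plain lists (the Pine library works on the most recent `len` bars instead):

```python
def regress(x, y):
    """Ordinary least squares of y on x: returns (alpha, beta, r2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    beta = sxy / sxx              # slope
    alpha = my - beta * mx        # y-intercept
    r2 = sxy * sxy / (sxx * syy)  # coefficient of determination
    return alpha, beta, r2
```

On points lying exactly on y = 2x + 1 the fit returns alpha=1, beta=2 and r2=1.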
FunctionNNLayerLibrary   "FunctionNNLayer" 
Generalized Neural Network Layer method.
 function(inputs, weights, n_nodes, activation_function, bias, alpha, scale)  Generalized Layer.
  Parameters:
     inputs : float array, input values.
     weights : float array, weight values.
     n_nodes : int, number of nodes in layer.
     activation_function : string, default='sigmoid', name of the activation function used.
     bias : float, default=1.0, bias to pass into activation function.
     alpha : float, default=na, if required to pass into activation function.
     scale : float, default=na, if required to pass into activation function.
  Returns: float
FunctionNNPerceptronLibrary   "FunctionNNPerceptron" 
Perceptron Function for Neural networks.
 function(inputs, weights, bias, activation_function, alpha, scale)  generalized perceptron node for Neural Networks.
  Parameters:
     inputs : float array, the inputs of the perceptron.
     weights : float array, the weights for inputs.
     bias : float, default=1.0, the default bias of the perceptron.
     activation_function : string, default='sigmoid', activation function applied to the output.
     alpha : float, default=na, if required for activation.
     scale : float, default=na, if required for activation.
  Returns: float
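A perceptron node is just a weighted sum of inputs plus a bias, passed through an activation function. A minimal Python sketch assuming the default sigmoid activation (other activations would slot in the same way):

```python
import math

def perceptron(inputs, weights, bias=1.0):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))
```

When the net input is zero (for example inputs [1, 1], weights [0.5, 0.5], bias -1) the sigmoid returns exactly 0.5.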
MLActivationFunctionsLibrary   "MLActivationFunctions" 
Activation functions for Neural networks.
 binary_step(value)  Basic threshold output classifier to activate/deactivate neuron.
  Parameters:
     value : float, value to process.
  Returns: float
 linear(value)  Input is the same as output.
  Parameters:
     value : float, value to process.
  Returns: float
 sigmoid(value)  Sigmoid or logistic function.
  Parameters:
     value : float, value to process.
  Returns: float
 sigmoid_derivative(value)  Derivative of sigmoid function.
  Parameters:
     value : float, value to process.
  Returns: float
 tanh(value)  Hyperbolic tangent function.
  Parameters:
     value : float, value to process.
  Returns: float
 tanh_derivative(value)  Hyperbolic tangent function derivative.
  Parameters:
     value : float, value to process.
  Returns: float
 relu(value)  Rectified linear unit (RELU) function.
  Parameters:
     value : float, value to process.
  Returns: float
 relu_derivative(value)  RELU function derivative.
  Parameters:
     value : float, value to process.
  Returns: float
 leaky_relu(value)  Leaky RELU function.
  Parameters:
     value : float, value to process.
  Returns: float
 leaky_relu_derivative(value)  Leaky RELU function derivative.
  Parameters:
     value : float, value to process.
  Returns: float
 relu6(value)  RELU-6 function.
  Parameters:
     value : float, value to process.
  Returns: float
 softmax(value)  Softmax function.
  Parameters:
     value : float array, values to process.
  Returns: float
 softplus(value)  Softplus function.
  Parameters:
     value : float, value to process.
  Returns: float
 softsign(value)  Softsign function.
  Parameters:
     value : float, value to process.
  Returns: float
 elu(value, alpha)  Exponential Linear Unit (ELU) function.
  Parameters:
     value : float, value to process.
     alpha : float, default=1.0, predefined constant, controls the value to which an ELU saturates for negative net inputs. .
  Returns: float
 selu(value, alpha, scale)  Scaled Exponential Linear Unit (SELU) function.
  Parameters:
     value : float, value to process.
     alpha : float, default=1.67326324, predefined constant, controls the value to which an SELU saturates for negative net inputs. .
     scale : float, default=1.05070098, predefined constant.
  Returns: float
 exponential(value)  Pointer to math.exp() function.
  Parameters:
     value : float, value to process.
  Returns: float
 function(name, value, alpha, scale)  Activation function.
  Parameters:
     name : string, name of activation function.
     value : float, value to process.
     alpha : float, default=na, if required. 
     scale : float, default=na, if required. 
  Returns: float
 derivative(name, value, alpha, scale)  Derivative Activation function.
  Parameters:
     name : string, name of activation function.
     value : float, value to process.
     alpha : float, default=na, if required. 
     scale : float, default=na, if required. 
  Returns: float
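The activations above are all standard closed-form functions. A few of them in Python, for reference against the library's outputs (illustrative transcriptions, not the Pine code):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def sigmoid_derivative(v):
    s = sigmoid(v)
    return s * (1.0 - s)

def relu(v):
    return max(0.0, v)

def leaky_relu(v, a=0.01):
    return v if v > 0 else a * v

def elu(v, alpha=1.0):
    return v if v > 0 else alpha * (math.exp(v) - 1.0)

def softmax(values):
    exps = [math.exp(v - max(values)) for v in values]  # shifted for stability
    total = sum(exps)
    return [e / total for e in exps]
```

Sanity checks: sigmoid(0) = 0.5, its derivative there is 0.25, relu clips negatives to zero, and softmax outputs always sum to 1.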
MLLossFunctionsLibrary   "MLLossFunctions" 
Methods for Loss functions.
 mse(expects, predicts)  Mean Squared Error (MSE) " MSE = 1/N * sum ((y - y')^2) ".
  Parameters:
     expects : float array, expected values.
     predicts : float array, prediction values.
  Returns: float
 binary_cross_entropy(expects, predicts)  Binary Cross-Entropy Loss (log).
  Parameters:
     expects : float array, expected values.
     predicts : float array, prediction values.
  Returns: float
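Both losses follow directly from their formulas. A Python sketch (the `eps` clamp guarding log(0) is an assumption of this illustration, not necessarily how the Pine library handles it):

```python
import math

def mse(expects, predicts):
    """Mean squared error: MSE = 1/N * sum((y - y')^2)."""
    return sum((y - p) ** 2 for y, p in zip(expects, predicts)) / len(expects)

def binary_cross_entropy(expects, predicts, eps=1e-12):
    """Binary cross-entropy; eps guards against log(0)."""
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(expects, predicts)) / len(expects)
```

A perfect prediction gives zero loss under both measures.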
DivergenceLibrary   "Divergence" 
Calculates a divergence between 2 series
 bullish(_src, _low, depth)  Calculates bullish divergence
  Parameters:
     _src : Main series
     _low : Comparison series (`low` is used if no argument is supplied)
     depth : Fractal Depth (`2` is used if no argument is supplied)
  Returns: 2 boolean values for regular and hidden divergence
 bearish(_src, _high, depth)  Calculates bearish divergence
  Parameters:
     _src : Main series
     _high : Comparison series (`high` is used if no argument is supplied)
     depth : Fractal Depth (`2` is used if no argument is supplied)
  Returns: 2 boolean values for regular and hidden divergence
I created this library to plug and play divergences in any code. 
You can create a divergence indicator from any series you like. 
Fractals are used to pinpoint the edge of the series. The higher the depth, the slower the divergence updates get.
My  Plain Stochastic Divergence  uses the same calculation. Watch it in action.
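Once fractals have pinpointed the pivot lows, the divergence test itself is a simple comparison. A hypothetical Python sketch of that final comparison for the bullish case (the Pine library additionally handles fractal detection and series alignment):

```python
def bullish_divergence(price_pivot_lows, osc_pivot_lows):
    """Compare the last two pivot lows of price and oscillator.
    Regular: price makes a lower low while the oscillator makes a higher low.
    Hidden:  price makes a higher low while the oscillator makes a lower low."""
    p1, p2 = price_pivot_lows[-2], price_pivot_lows[-1]
    o1, o2 = osc_pivot_lows[-2], osc_pivot_lows[-1]
    regular = p2 < p1 and o2 > o1
    hidden = p2 > p1 and o2 < o1
    return regular, hidden
```

The bearish version mirrors this with pivot highs and the inequalities reversed.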
 
FunctionPeakDetectionLibrary   "FunctionPeakDetection" 
Method used for peak detection, similar to the MATLAB peakdet method
 function(sample_x, sample_y, delta)  Method for detecting peaks.
  Parameters:
     sample_x : float array, sample with indices.
     sample_y : float array, sample with data.
     delta : float, positive threshold value for detecting a peak.
  Returns: tuple with found max/min peak indices.
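The peakdet idea: a candidate maximum is confirmed only once the series has dropped at least `delta` below it, and a candidate minimum once the series has risen at least `delta` above it. A Python sketch of that logic (illustrative; the Pine version also carries the `sample_x` indices):

```python
def peakdet(sample_y, delta):
    """Return (max indices, min indices) found with threshold delta."""
    max_idx, min_idx = [], []
    mn, mx = float('inf'), float('-inf')
    mn_pos = mx_pos = 0
    look_for_max = True
    for i, v in enumerate(sample_y):
        if v > mx:
            mx, mx_pos = v, i
        if v < mn:
            mn, mn_pos = v, i
        if look_for_max:
            if v < mx - delta:          # dropped far enough: confirm the max
                max_idx.append(mx_pos)
                mn, mn_pos, look_for_max = v, i, False
        else:
            if v > mn + delta:          # rose far enough: confirm the min
                min_idx.append(mn_pos)
                mx, mx_pos, look_for_max = v, i, True
    return max_idx, min_idx
```

On the series [0,1,2,1,0,1,3,1,0] with delta=1 this finds maxima at indices 2 and 6 and a minimum at index 4.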
DailyDeviationLibrary   "DailyDeviation" 
Helps in determining the relative deviation from the open of the day compared to the high or low values.
 hlcDeltaArrays(daysPrior, maxDeviation, spec, res)  Returns a set of arrays representing the daily deviation of price for a given number of days.
  Parameters:
     daysPrior : Number of days back to get the close from.
     maxDeviation : Maximum deviation before a value is considered an outlier. A value of 0 will not filter results.
     spec : session.regular (default), session.extended or other time spec.
     res : The resolution (default = '1440').
  Returns: a set of arrays where OH = Open vs High, OL = Open vs Low, and OC = Open vs Close
 fromOpen(daysPrior, maxDeviation, comparison, spec, res)  Returns a value representing the deviation from the open (to the high or low) of the current day, given a number of days to measure from.
  Parameters:
     daysPrior : Number of days back to get the close from.
     maxDeviation : Maximum deviation before a value is considered an outlier. A value of 0 will not filter results.
     comparison : The value used in comparison to the current open for the day.
     spec : session.regular (default), session.extended or other time spec.
     res : The resolution (default = '1440').
VolatilityLibrary   "Volatility" 
Functions for determining if volatility (true range) is within or exceeds normal.
The "True Range" (ta.tr) is used for measuring volatility.
Values are normalized by the volume adjusted weighted moving average (VAWMA) to be more like percent moves than price.
 current(len)  Returns the current price adjusted volatility ratio.
  Parameters:
     len : Number of bars to get a volume adjusted weighted average price.
 normal(len, maxDeviation, level, gapDays, spec, res)  Returns the normal upper range of volatility. Compensates for overnight gaps within a regular session.
  Parameters:
     len : Number of bars to measure volatility.
     maxDeviation : The limit of volatility before considered an outlier.
     level : The amount of standard deviation after cleaning outliers to be considered within normal.
     gapDays : The number of days in the past to measure overnight gap volatility.
     spec : session.regular (default), session.extended or other time spec.
     res : The resolution (default = '1440').
 isNormal(len, maxDeviation, level, gapDays, spec, res)  Returns true if the volatility (true range) is within normal levels. Compensates for overnight gaps within a regular session.
  Parameters:
     len : Number of bars to measure volatility.
     maxDeviation : The limit of volatility before considered an outlier.
     level : The amount of standard deviation after cleaning outliers to be considered within normal.
     gapDays : The number of days in the past to measure overnight gap volatility.
     spec : session.regular (default), session.extended or other time spec.
     res : The resolution (default = '1440').
 severity(len, maxDeviation, level, gapDays, spec, res)  Returns ratio of the current value to the normal value. Compensates for overnight gaps within a regular session.
  Parameters:
     len : Number of bars to measure volatility.
     maxDeviation : The limit of volatility before considered an outlier.
     level : The amount of standard deviation after cleaning outliers to be considered within normal.
     gapDays : The number of days in the past to measure overnight gap volatility.
     spec : session.regular (default), session.extended or other time spec.
     res : The resolution (default = '1440').
DataCleanerLibrary   "DataCleaner" 
Functions for acquiring outlier levels and acquiring a cleaned version of a series.
 outlierLevel(src, len, level)  Gets the (standard deviation) outlier level for a given series.
  Parameters:
     src : The series to average and add a multiple of the standard deviation to.
     len : The number of bars to measure. 
     level : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary. 
  Returns: The average of the series plus the multiple of the standard deviation.
 cleanUsing(src, result, len, maxDeviation)  Returns an array representing the result series with (outliers provided by the source) removed.
  Parameters:
     src : The source series to read from.
     result : The result series.
     len : The maximum size of the resultant array.
     maxDeviation : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary. 
  Returns: An array containing the cleaned series.
 clean(src, len, maxDeviation)  Returns an array representing the source series with outliers removed.
  Parameters:
     src : The source series to read from.
     len : The maximum size of the resultant array.
     maxDeviation : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary. 
  Returns: An array containing the cleaned series.
 outlierLevelAdjusted(src, level, len, maxDeviation)  Gets the (standard deviation) outlier level for a given series after a single pass of removing any outliers.
  Parameters:
     src : The series to average and add a multiple of the standard deviation to.
     level : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary. 
     len : The number of bars to measure. 
     maxDeviation : The optional standard deviation level to use when cleaning the series.  The default is the value of the provided level. 
  Returns: The average of the series plus the multiple of the standard deviation.
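The outlier boundary is simply the mean plus a multiple of the standard deviation, and cleaning drops whatever lies beyond it. A Python sketch of the core idea (illustrative; whether the library uses sample or population deviation is not stated, so `statistics.stdev` here is an assumption):

```python
import statistics

def outlier_level(src, level):
    """Mean plus `level` multiples of the (sample) standard deviation."""
    return statistics.fmean(src) + level * statistics.stdev(src)

def clean(src, max_deviation):
    """Drop values beyond max_deviation standard deviations from the mean."""
    mean = statistics.fmean(src)
    sd = statistics.stdev(src)
    return [v for v in src if abs(v - mean) <= max_deviation * sd]
```

For [1, 2, 3, 100] a single large value dominates the deviation, and cleaning at one standard deviation removes it.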
benchLibrary   "bench" 
A simple benchmark library to analyse script performance and bottlenecks.
Very useful if you are developing an overly complex application in Pine Script, or trying to optimise a library / function / algorithm...
 
 Supports artificial looping benchmarks (of fast functions)
 Supports integrated linear benchmarks (of expensive scripts)
 
One important thing to note is that the Pine Script compiler will completely ignore any calculations that do not eventually produce chart output. Therefore, if you are performing an artificial benchmark you will need to use the bench.reference(value) function to ensure the calculations are executed.
Please check the examples towards the bottom of the script.
 Quick Reference 
(Be warned this uses non-standard space characters to get the line indentation to work in the description!)
```
// Looping benchmark style
benchmark = bench.new(samples = 500, loops = 5000)
data = array.new_int()
if bench.start(benchmark)
  while bench.loop(benchmark)
    array.unshift(data, timenow)
  bench.mark(benchmark)
  while bench.loop(benchmark)
    array.unshift(data, timenow)
  bench.mark(benchmark)
  while bench.loop(benchmark)
    array.unshift(data, timenow)
  bench.stop(benchmark)
  bench.reference(array.get(data, 0))
bench.report(benchmark, '1x array.unshift()')
// Linear benchmark style
benchmark = bench.new()
data = array.new_int()
bench.start(benchmark)
for i = 0 to 1000
  array.unshift(data, timenow)
bench.mark(benchmark)
for i = 0 to 1000
  array.unshift(data, timenow)
bench.stop(benchmark)
bench.reference(array.get(data, 0))
bench.report(benchmark,'1000x array.unshift()')
```
 Detailed Interface 
 new(samples, loops)  Initialises a new benchmark array
  Parameters:
     samples : int, the number of bars in which to collect samples
     loops : int, the number of loops to execute within each sample
  Returns: int , the benchmark array
 active(benchmark)  Determines whether the benchmark's state is active
  Parameters:
     benchmark : int , the benchmark array
  Returns: bool, true only if the state is active
 start(benchmark)  Start recording a benchmark from this point
  Parameters:
     benchmark : int , the benchmark array
  Returns: bool, true only if the benchmark is unfinished
 loop(benchmark)  Returns true until the call count exceeds the loops value passed to bench.new()
  Parameters:
     benchmark : int , the benchmark array
  Returns: bool, true while looping
 reference(number, string)  Add a compiler reference to the chart so the calculations don't get optimised away
  Parameters:
     number : float, a numeric value to reference
     string : string, a string value to reference
 mark(benchmark, number, string)  Marks the end of one recorded interval and the start of the next
  Parameters:
     benchmark : int , the benchmark array
     number : float, a numeric value to reference
     string : string, a string value to reference
 stop(benchmark, number, string)  Stop the benchmark, ending the final interval
  Parameters:
     benchmark : int , the benchmark array
     number : float, a numeric value to reference
     string : string, a string value to reference
 report(benchmark, title, text_size, position)  Prints the benchmark results to the screen
  Parameters:
     benchmark : int , the benchmark array
     title : string, add a custom title to the report
     text_size : string, the text size of the log console (global size vars)
     position : string, the position of the log console (global position vars)
 unittest_bench(case)  Bench module unit tests, for inclusion in parent script test suite. Usage: bench.unittest_bench(__ASSERTS)
  Parameters:
     case : string , the current test case and array of previous unit tests (__ASSERTS)
 unittest(verbose)  Run the bench module unit tests as a stand alone. Usage: bench.unittest()
  Parameters:
     verbose : bool, optionally disable the full report to only display failures
HurstExponentLibrary   "HurstExponent" 
Library to calculate Hurst Exponent refactored from  Hurst Exponent - Detrended Fluctuation Analysis  
 demean(src)  Calculates a series subtracted from the series mean.
  Parameters:
     src : The series used to calculate the difference from the mean (e.g. log returns).
  Returns: The series subtracted from the series mean
 cumsum(src, length)  Calculates a cumulated sum from the series.
  Parameters:
     src : The series used to calculate the cumulative sum (e.g. demeaned log returns).
     length : The length used to calculate the cumulative sum (e.g. 100).
  Returns: The cumulative sum of the series as an array
 aproximateLogScale(scale, length)  Calculates an approximated log scale. Used to reduce the sample size
  Parameters:
     scale : The scale to approximate.
     length : The length used to approximate the expected scale.
  Returns: The approximated log scale of the value
 rootMeanSum(cumulativeSum, barId, numberOfSegments)  Calculates linear trend to determine error between linear trend and cumulative sum
  Parameters:
     cumulativeSum : The cumulative sum array to regress.
     barId : The barId for the slice
     numberOfSegments : The total number of segments used for the regression calculation
  Returns: The error between linear trend and cumulative sum
 averageRootMeanSum(cumulativeSum, barId, length)  Calculates the root mean sum measured for each block (e.g. the approximated log scale)
  Parameters:
     cumulativeSum : The cumulative sum array to regress and determine the average of.
     barId : The barId for the slice
     length : The length used for finding the average
  Returns: The average root mean sum error of the cumulativeSum
 criticalValues(length)  Calculates the critical values for a Hurst exponent for a given length
  Parameters:
     length : The length used for finding the average
  Returns: The critical value, upper critical value and lower critical value for a Hurst exponent
 slope(cumulativeSum, length)  Calculates the Hurst exponent slope measured from the root mean sum, scaled to a log-log plot using linear regression
  Parameters:
     cumulativeSum : The cumulative sum array to regress and determine the average of.
     length : The length used for the hurst exponent sample size
  Returns: The slope of the Hurst exponent
 smooth(src, length)  Smooths input using advanced linear regression
  Parameters:
     src : The series to smooth (e.g. hurst exponent slope)
     length : The length used to smooth
  Returns: The src smoothed according to the given length
 exponent(src, hurstLength)  Wrapper function to calculate the Hurst exponent slope
  Parameters:
     src : The series used for returns calculation (e.g. close)
     hurstLength : The length used to calculate the Hurst exponent (should be greater than 50)
  Returns: The Hurst exponent slope of the src
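The first two steps of the pipeline, demeaning and building the cumulative profile, are simple to state exactly. A Python sketch (illustrative; the Pine versions operate on rolling windows of bars):

```python
def demean(src):
    """Series minus its own mean."""
    mean = sum(src) / len(src)
    return [v - mean for v in src]

def cumsum(src):
    """Running cumulative sum of a series."""
    total, out = 0.0, []
    for v in src:
        total += v
        out.append(total)
    return out
```

Because the demeaned series sums to zero by construction, its cumulative profile always ends at zero; the Hurst analysis then measures how this profile scales across segment sizes.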
MomentsLibrary   "Moments" 
Based on  Moments (Mean,Variance,Skewness,Kurtosis)  . Rewritten for Pinescript v5.
 logReturns(src)  Calculates log returns of a series (e.g log percentage change)
  Parameters:
     src : Source to use for the returns calculation (e.g. close).
  Returns: Log percentage returns of a series
 mean(src, length)  Calculates the mean of a series using ta.sma
  Parameters:
     src : Source to use for the mean calculation (e.g. close).
     length : Length to use mean calculation (e.g. 14).
  Returns: The sma of the source over the length provided.
 variance(src, length)  Calculates the variance of a series
  Parameters:
     src : Source to use for the variance calculation (e.g. close).
     length : Length to use for the variance calculation (e.g. 14).
  Returns: The variance of the source over the length provided.
 standardDeviation(src, length)  Calculates the standard deviation of a series
  Parameters:
     src : Source to use for the standard deviation calculation (e.g. close).
     length : Length to use for the standard deviation calculation (e.g. 14).
  Returns: The standard deviation of the source over the length provided.
 skewness(src, length)  Calculates the skewness of a series
  Parameters:
     src : Source to use for the skewness calculation (e.g. close).
     length : Length to use for the skewness calculation (e.g. 14).
  Returns: The skewness of the source over the length provided.
 kurtosis(src, length)  Calculates the kurtosis of a series
  Parameters:
     src : Source to use for the kurtosis calculation (e.g. close).
     length : Length to use for the kurtosis calculation (e.g. 14).
  Returns: The kurtosis of the source over the length provided.
 skewnessStandardError(sampleSize)  Estimates the standard error of skewness based on sample size
  Parameters:
     sampleSize : The number of samples used for calculating standard error.
  Returns: The standard error estimate for skewness based on the sample size provided.
 kurtosisStandardError(sampleSize)  Estimates the standard error of kurtosis based on sample size
  Parameters:
     sampleSize : The number of samples used for calculating standard error.
  Returns: The standard error estimate for kurtosis based on the sample size provided.
 skewnessCriticalValue(sampleSize)  Estimates the critical value of skewness based on sample size
  Parameters:
     sampleSize : The number of samples used for calculating critical value.
  Returns: The critical value estimate for skewness based on the sample size provided.
 kurtosisCriticalValue(sampleSize)  Estimates the critical value of kurtosis based on sample size
  Parameters:
     sampleSize : The number of samples used for calculating critical value.
  Returns: The critical value estimate for kurtosis based on the sample size provided.
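Skewness and kurtosis are ratios of central moments. A Python sketch using the plain population moments (an assumption; the Pine library may apply sample-size corrections):

```python
def central_moment(src, k):
    """k-th central moment of the series."""
    mean = sum(src) / len(src)
    return sum((v - mean) ** k for v in src) / len(src)

def skewness(src):
    """Third standardized moment: m3 / m2^(3/2)."""
    return central_moment(src, 3) / central_moment(src, 2) ** 1.5

def kurtosis(src):
    """Fourth standardized moment: m4 / m2^2 (not excess kurtosis)."""
    return central_moment(src, 4) / central_moment(src, 2) ** 2
```

A symmetric series such as [1, 2, 3, 4, 5] has zero skewness; its kurtosis under this definition is 1.7.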
pNRTRLibrary   "pNRTR" 
Provides functions for calculating Nick Rypock Trailing Reverse (NRTR) trend values with higher precision offsets for both low and high points, rather than the standard single offset.
 pnrtr(low_offset, high_offset, value)  
  Parameters:
     low_offset : float, default=0.2, offset used for nrtr low_point calculations.
     high_offset : float, default=0.2, offset used for nrtr high_point calculations.
     value : float, default=close, variable used for nrtr point calculations.
cacheLibrary   "cache" 
A simple cache library to store key value pairs.
 
  Fed up of injecting and returning so many values all the time?
  Want to separate your code and keep it clean?
  Need to make an expensive calculation and use the results in numerous places?
  Want to throttle calculations or persist random values across bars or ticks?
 
Then you've come to the right place. Or not! Up to you, I don't mind either way... ;)
Check the helpers and unit tests in the script for further detail.
 Detailed Interface 
 init(persistant)  Initialises the synchronised cache key and value arrays
  Parameters:
     persistant : bool, toggles data persistence between bars and ticks
  Returns:  [string , float ], a tuple of both arrays
 set(keys, values, key, value)  Sets a value into the cache
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
     key : string, the cache key to create or update
     value : float, the value to set
 has(keys, values, key)  Checks if the cache has a key
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
     key : string, the cache key to check
  Returns: bool, true only if the key is found
 get(keys, values, key)  Gets a keys value from the cache
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
     key : string, the cache key to get
  Returns: float, the stored value
 remove(keys, values, key)  Removes a key and value from the cache
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
     key : string, the cache key to remove
 count()  Counts how many key value pairs in the cache
  Returns: int, the total number of pairs
 loop(keys, values)  Returns true for each value in the cache (use as the while loop expression)
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
 next(keys, values)  Returns each key value pair on successive calls (use in the while loop)
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
  Returns:  , tuple of each key value pair
 clear(keys, values)  Clears all key value pairs from the cache
  Parameters:
     keys : string , the array of cache keys
     values : float , the array of cache values
 unittest_cache(case)  Cache module unit tests, for inclusion in parent script test suite. Usage: cache.unittest_cache(__ASSERTS)
  Parameters:
     case : string , the current test case and array of previous unit tests (__ASSERTS)
 unittest(verbose)  Run the cache module unit tests as a stand alone. Usage: cache.unittest()
  Parameters:
     verbose : bool, optionally disable the full report to only display failures
FFTLibraryLibrary   "FFTLibrary"  contains a function for performing Fast Fourier Transform (FFT) along with a few helper functions. In general, FFT is defined for complex inputs and outputs. The real and imaginary parts of formally complex data are treated as separate arrays (denoted as x and y). For real-valued data, the array of imaginary parts should be filled with zeros.
 FFT function 
 fft(x, y, dir)  : Computes the one-dimensional discrete Fourier transform using an  in-place complex-to-complex FFT algorithm . Note: The transform also produces a mirror copy of the frequency components, which correspond to the signal's negative frequencies. 
  Parameters:
     x : float array, real part of the data,  array size must be a power of 2 
     y : float array, imaginary part of the data, array size must be the same as  x ; for real-valued input,  y  must be an array of zeros
     dir : string, defines the direction of the transform: "forward" (time-to-frequency) or "inverse" (frequency-to-time)
  Returns:  x, y : tuple (float array, float array), real and imaginary parts of the transformed data (original x and y are changed on output)
 Helper functions 
 fftPower(x, y)  : Helper function that computes the power of each frequency component (in other words, Fourier amplitudes squared).
  Parameters:
     x : float array, real part of the Fourier amplitudes
     y : float array, imaginary part of the Fourier amplitudes
  Returns:  power : float array of the same length as  x  and  y , Fourier amplitudes squared
 fftFreq(N)  : Helper function that returns the FFT sample frequencies defined in cycles per timeframe unit. For example, if the timeframe is 5m, the frequencies are in cycles/(5 minutes).
  Parameters:
     N : int, window length (number of points in the transformed dataset)
  Returns:  freq  : float array of N, contains the sample frequencies (with zero at the start).
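The complex-to-complex transform with separate real and imaginary arrays can be sketched with a recursive radix-2 Cooley-Tukey algorithm. This Python illustration returns new arrays rather than working in place, and scales the inverse by 1/N (normalization conventions vary, so treat that as an assumption):

```python
import cmath

def fft(x, y, direction="forward"):
    """Radix-2 complex FFT on separate real (x) and imaginary (y) arrays.
    Array length must be a power of 2. Returns new (x, y) arrays."""
    sign = -1.0 if direction == "forward" else 1.0
    def rec(data):
        n = len(data)
        if n == 1:
            return data
        even = rec(data[0::2])
        odd = rec(data[1::2])
        out = [0j] * n
        for k in range(n // 2):
            tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + tw
            out[k + n // 2] = even[k] - tw
        return out
    result = rec([complex(a, b) for a, b in zip(x, y)])
    if direction != "forward":
        result = [c / len(result) for c in result]  # 1/N inverse scaling
    return [c.real for c in result], [c.imag for c in result]

def fft_power(x, y):
    """Squared Fourier amplitudes for each frequency component."""
    return [a * a + b * b for a, b in zip(x, y)]
```

For the real input [0, 1, 0, -1] (one full sine cycle over four points) the power spectrum is [0, 4, 0, 4], showing the mirror copy at the negative frequency, and a forward-then-inverse round trip recovers the input.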
FunctionProbabilityDistributionSamplingLibrary   "FunctionProbabilityDistributionSampling" 
Methods for probability distribution sampling selection.
 sample(probabilities)  Computes a randomly selected index from a probability distribution.
  Parameters:
     probabilities : float array, probabilities of sample.
  Returns: int.
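Sampling from a discrete distribution amounts to drawing a uniform number and walking the cumulative probabilities until it is exceeded. A Python sketch of that idea (illustrative, not the Pine implementation):

```python
import random

def sample(probabilities, rng=random.Random()):
    """Pick index i with chance proportional to probabilities[i]."""
    r = rng.random() * sum(probabilities)
    cumulative = 0.0
    for i, p in enumerate(probabilities):
        cumulative += p
        if r < cumulative:
            return i
    return len(probabilities) - 1  # guard against floating-point edge cases
```

A degenerate distribution like [0, 1, 0] always selects index 1, which makes the behaviour easy to verify.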
FunctionElementsInArrayLibrary   "FunctionElementsInArray" 
Methods to count the number of elements in arrays
 count_float(sample, value)  Counts the number of elements equal to provided value in array.
  Parameters:
     sample : float array, sample data to process.
     value : float value to check for equality.
  Returns: int.
 count_int(sample, value)  Counts the number of elements equal to provided value in array.
  Parameters:
     sample : int array, sample data to process.
     value : int value to check for equality.
  Returns: int.
 count_string(sample, value)  Counts the number of elements equal to provided value in array.
  Parameters:
     sample : string array, sample data to process.
     value : string value to check for equality.
  Returns: int.
 count_bool(sample, value)  Counts the number of elements equal to provided value in array.
  Parameters:
     sample : bool array, sample data to process.
     value : bool value to check for equality.
  Returns: int.
 count_color(sample, value)  Counts the number of elements equal to provided value in array.
  Parameters:
     sample : color array, sample data to process.
     value : color value to check for equality.
  Returns: int.
 extract_indices_float(sample, value)  Finds the elements equal to the provided value in the array and returns their indices.
  Parameters:
     sample : float array, sample data to process.
     value : float value to check for equality.
  Returns: int array, indices of the matching elements.
 extract_indices_int(sample, value)  Finds the elements equal to the provided value in the array and returns their indices.
  Parameters:
     sample : int array, sample data to process.
     value : int value to check for equality.
  Returns: int array, indices of the matching elements.
 extract_indices_string(sample, value)  Finds the elements equal to the provided value in the array and returns their indices.
  Parameters:
     sample : string array, sample data to process.
     value : string value to check for equality.
  Returns: int array, indices of the matching elements.
 extract_indices_bool(sample, value)  Finds the elements equal to the provided value in the array and returns their indices.
  Parameters:
     sample : bool array, sample data to process.
     value : bool value to check for equality.
  Returns: int array, indices of the matching elements.
 extract_indices_color(sample, value)  Finds the elements equal to the provided value in the array and returns their indices.
  Parameters:
     sample : color array, sample data to process.
     value : color value to check for equality.
  Returns: int array, indices of the matching elements.
LinearRegressionLibraryLibrary   "LinearRegressionLibrary"  contains functions for fitting a regression line to the time series by means of different models, as well as functions for estimating the accuracy of the fit.
 Linear regression algorithms: 
 RepeatedMedian(y, n, lastBar)  applies  repeated median regression  (robust linear regression algorithm) to the input time series within the selected interval.
 Parameters: 
 
 y :: float series, source time series (e.g. close)
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 
 Output: 
 
 mSlope :: float, slope of the regression line
 mInter  :: float, intercept of the regression line
 
 TheilSen(y, n, lastBar)  applies the  Theil-Sen estimator  (robust linear regression algorithm) to the input time series within the selected interval.
 Parameters: 
 
 y :: float series, source time series 
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 
 Output: 
 
 tsSlope :: float, slope of the regression line
 tsInter  :: float, intercept of the regression line
 
 OrdinaryLeastSquares(y, n, lastBar)  applies the  ordinary least squares  regression (non-robust) to the input time series within the selected interval.
 Parameters: 
 
 y :: float series, source time series 
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 
 Output: 
 
 olsSlope :: float, slope of the regression line
 olsInter  :: float, intercept of the regression line
 
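 For reference, the ordinary least squares fit has a closed form, sketched below (an illustrative stand-alone computation, not the library's code; it indexes the window with x = 0..n-1 so the slope/intercept match the  Npoints / lastBar  convention used in the usage example further down):

```pine
//@version=5
indicator("OLSSketch", overlay=true)

// Closed-form OLS over the last n bars:
// slope = (n*Σxy - Σx*Σy) / (n*Σx² - (Σx)²), intercept = (Σy - slope*Σx) / n
int n = 100
float sx  = 0.0
float sy  = 0.0
float sxy = 0.0
float sxx = 0.0
for i = 0 to n - 1
    float x = i
    float y = close[n - 1 - i]  // oldest bar of the window gets x = 0
    sx  += x
    sy  += y
    sxy += x * y
    sxx += x * x
float slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
float intercept = (sy - slope * sx) / n

plot(intercept + slope * (n - 1), title="OLS fit at current bar")
```

Unlike the repeated median and Theil-Sen estimators above, a single outlier can pull this fit arbitrarily far, which is why the robust variants are offered alongside it.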
 Model performance metrics: 
 metricRMSE(y, n, lastBar, slope, intercept)  returns the  Root-Mean-Square Error (RMSE)  of the regression. The better the model, the lower the RMSE.
 Parameters: 
 
 y :: float series, source time series (e.g. close)
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 slope :: float, slope of the evaluated linear regression line
 intercept :: float, intercept of the evaluated linear regression line
 
 Output: 
 
 rmse :: float, RMSE value
 
 metricMAE(y, n, lastBar, slope, intercept)  returns the  Mean Absolute Error (MAE)  of the regression. MAE is similar to RMSE but is less sensitive to outliers. The better the model, the lower the MAE.
 Parameters: 
 
 y :: float series, source time series
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 slope :: float, slope of the evaluated linear regression line
 intercept :: float, intercept of the evaluated linear regression line
 
 Output: 
 
 mae :: float, MAE value
 
 metricR2(y, n, lastBar, slope, intercept)  returns the  coefficient of determination (R squared)  of the regression. The better the linear regression fits the data (compared to the sample mean), the closer the value of the R squared is to 1.
 Parameters: 
 
 y :: float series, source time series
 n :: integer, the length of the selected time interval
 lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
 slope :: float, slope of the evaluated linear regression line
 intercept :: float, intercept of the evaluated linear regression line
 
 Output: 
 
  Rsq :: float, R-squared score
 
 Usage example:
 
//@version=5
indicator('ExampleLinReg', overlay=true)
// import the library
import tbiktag/LinearRegressionLibrary/1 as linreg
// define the studied interval: last 100 bars
int   Npoints  = 100
int   lastBar  = bar_index
int   firstBar = bar_index - Npoints
// apply repeated median regression to the closing price time series within the specified interval
[slope, intercept] = linreg.RepeatedMedian(close, Npoints, lastBar)
// calculate the root-mean-square error of the obtained linear fit
rmse = linreg.metricRMSE(close, Npoints, lastBar, slope, intercept)
// plot the line and print the RMSE value
float y1 = intercept
float y2 = intercept + slope * (Npoints - 1)
if barstate.islast
    line.new(firstBar, y1, lastBar, y2)
    label.new(lastBar, y2, text='RMSE = ' + str.format("{0,number,#.#}", rmse))
FunctionCompoundInterestLibrary   "FunctionCompoundInterest" 
Methods for compound interest.
 simple_compound(principal, rate, duration)  Computes compound interest for given duration.
  Parameters:
     principal : float, the principal or starting value.
     rate : float, the rate of interest.
     duration : float, the period of growth.
  Returns: float.
 variable_compound(principal, rates, duration)  Computes variable compound interest for given duration.
  Parameters:
     principal : float, the principal or starting value.
     rates : float array, the rates of interest.
     duration : int, the period of growth.
  Returns: float array.
 simple_compound_array(principal, rates, duration)  Computes variable compound interest for given duration.
  Parameters:
     principal : float, the principal or starting value.
     rates : float array, the rates of interest.
     duration : int, the period of growth.
  Returns: float array.
 variable_compound_array(principal, rates, duration)  Computes variable compound interest for given duration.
  Parameters:
     principal : float, the principal or starting value.
     rates : float array, the rates of interest.
     duration : int, the period of growth.
  Returns: float array.
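 Usage example (a hedged sketch for illustration, not the library's code:  simple_compound  applies the standard formula principal · (1 + rate)^duration, and the variable variant is sketched under the assumption that  rates  holds one rate per period):

```pine
//@version=5
indicator("CompoundSketch")

// Sketch of simple_compound(): constant rate over the whole duration.
simple_compound(float principal, float rate, float duration) =>
    principal * math.pow(1.0 + rate, duration)

// Sketch of variable_compound(): apply each period's rate in sequence,
// recording the running value after every period.
variable_compound(float principal, float[] rates, int duration) =>
    float[] values = array.new_float(0)
    float value = principal
    for i = 0 to duration - 1
        value *= 1.0 + array.get(rates, i)
        array.push(values, value)
    values

plot(simple_compound(100.0, 0.05, 10.0))  // ≈ 162.89
```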
LibraryPrivateUsage001   This is a public library that includes the functions explained below. The libraries are considered public-domain code, and permission from the author is not required to reuse these functions in your open-source scripts.