Library   "AdxCalcHourly"
 getBars() 
  getBars: Returns the number of bars to use in the historical lookback period
  Returns: simple int
 directionDown() 
  directionDown: Calculates the direction down for bar_index
  Returns: series float
 directionUp() 
  directionUp: Calculates the direction up for bar_index
  Returns: series float
 trueRangeMovingAverage() 
  trueRangeMovingAverage: Calculates the true range moving average over the historical lookback period
  Returns: series float
 positiveDirectionalMovement() 
  positiveDirectionalMovement: Calculates the positive direction movement for bar_index
  Returns: series float
 negativeDirectionalMovement() 
  negativeDirectionalMovement: Calculates the negative direction movement for bar_index
  Returns: series float
 totalDirectionDown() 
  totalDirectionDown: Calculates the total direction down for the historical lookback period
  Returns: series float
 totalDirectionUp() 
  totalDirectionUp: Calculates the total direction up for the historical lookback period
  Returns: series float
 totalDirection() 
  totalDirection: Calculates the total direction movement for the historical lookback period
  Returns: series float
 averageDirectionalIndex() 
  averageDirectionalIndex: Calculates the average directional index (ADX) based on the trend for the historical lookback period
  Returns: series float
 getAdxHistoricalAverage() 
  getAdxHistoricalAverage: Calculates the average directional index (ADX) for the historical lookback period
  Returns: series float
 getAdxHistoricalHigh() 
  getAdxHistoricalHigh: Calculates the historical high of the directional index (ADX) for the historical lookback period
  Returns: series float
 getAdxHistoricalLow() 
  getAdxHistoricalLow: Calculates the historical low of the directional index (ADX) for the historical lookback period
  Returns: series float
 getAdxOpinion() 
  getAdxOpinion: Calculates a recommendation for the directional index (ADX) based on the historical lookback period
  Returns: series float
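For reference, here is a minimal Pine Script™ v5 sketch of the textbook Wilder DI/ADX calculation that the functions above describe step by step. It is a generic illustration, not this library's hourly implementation; the 14-bar length is an assumption.
 //@version=5
 indicator("ADX sketch")
 int len = input.int(14, "Length")
 float up = ta.change(high)
 float down = -ta.change(low)
 float plusDM = na(up) ? na : (up > down and up > 0 ? up : 0)        // positive directional movement
 float minusDM = na(down) ? na : (down > up and down > 0 ? down : 0) // negative directional movement
 float trur = ta.rma(ta.tr, len)                                     // true range moving average
 float plusDI = 100 * ta.rma(plusDM, len) / trur                     // direction up
 float minusDI = 100 * ta.rma(minusDM, len) / trur                   // direction down
 float dx = 100 * math.abs(plusDI - minusDI) / (plusDI + minusDI)    // total direction
 float adx = ta.rma(dx, len)                                         // average directional index
 plot(adx)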
Library   "WpProbabilisticLib"
Library that contains functions to calculate probabilistic based on historical candle analysis
 CandleType(open, close)  This function checks what type of candle it is, based on its open and close prices
  Parameters:
     open : series float (open price)
     close : series float (close price)
  Returns: This function returns the candle type (1 for Bullish, -1 for Bearish, 0 for a Doji candle)
 CandleTypePercentDiff(open, close, qtd_candles_before, consider_dojis)  This function calculates the percentage difference between Bullish and Bearish candles over a range of candles back in time, and identifies which type has the fewest occurrences
  Parameters:
     open : series float (open price series)
     close : series float (close price series)
     qtd_candles_before : simple int (Number of candles before to calculate)
     consider_dojis : simple string (How to treat dojis: "NO" to ignore them, "AS_RED" to count them as bearish, "AS_GREEN" to count them as bullish)
  Returns: tuple(float, int) (The percentage difference between Bullish and Bearish candles, and which type of candle has the fewest occurrences)
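As an illustration of the classification described above, a minimal sketch of the candle-type rule (generic code, not the library's source):
 //@version=5
 indicator("Candle type sketch")
 // 1 = Bullish, -1 = Bearish, 0 = Doji
 candleType(float o, float c) =>
     c > o ? 1 : c < o ? -1 : 0
 plot(candleType(open, close))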
Pivot
This library was designed to create three different datasets using Bill Williams fractals. The goal is to spot trends in reversal data and ultimately use these datasets to help predict future price reversals.
First, the  pivot()  function is used to initialize and populate three separate arrays (high pivots, low pivots, all pivots). Since each high/low price depends on the bar_index, the bar_index, pivot direction (high/low), and high/low value are compressed into a single "__"-delimited string to maintain the data's integrity. Once each string array is populated and organized by bar_index, all three are returned inside a tuple. The return value must be deconstructed,  [H, L, A] = pivot() , for each array's values to be accessed using  getPivot() . This boilerplate allows data to be accessed more efficiently in a recursive environment. getPivot() was designed to be used inside a for or while block to populate matrices for further analysis. Again, getPivot() return values must be exposed through deconstruction:  [x, d, y] = getPivot() . See code for more details.
 pivot(int XLR)  initializes and populates arrays
 Parameters 
 
 XLR  - number of bars to the left and right that must be lower for a high to be considered a pivotHigh, or vice versa. This number will drastically change the size and scope of the returned datasets. Smaller values will produce much larger datasets, which might model short-term price activity well; in contrast, larger values will produce smaller datasets, which might model longer-term price activity well.
 
 Returns  - tuple of three string arrays
 getPivot(string  arrayID, int index)  accesses array data
 Parameters 
 
 arrayID  - the variable name for one of the three arrays returned by pivot().
 index  - the index of the provided array, with 0 being the most recent pivot point. can be set to " i " in a loop to access values recursively
  
 Returns  - tuple
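A hypothetical usage sketch of the deconstruction pattern described above. The function names and tuple shapes come from this description; the import path, version and XLR value are assumptions.
 //@version=5
 indicator("Pivot usage sketch", overlay = true)
 import username/Pivot/1 as piv                // hypothetical import path and version
 [H, L, A] = piv.pivot(2)                      // three string arrays: high, low, and all pivots
 var float lastPivotPrice = na
 if barstate.islast
     [x, d, y] = piv.getPivot(A, 0)            // most recent pivot: bar index, direction, value
     lastPivotPrice := y
 plot(lastPivotPrice)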
Library   "AutoFiboRetrace"
TODO: add library description here
 fun(x)  TODO: add function description here
  Parameters:
     x : TODO: add parameter x description here
  Returns: TODO: add what function returns
Library  "MonthlyReturnsVsMarket"  is a repackaging of the script  here
Credits to @QuantNomad for the original script
Now you can avoid polluting your own strategy's code with the monthly returns table code: just import the library and call the  displayMonthlyPnL(int precision)  function.
To be used in strategy scripts.
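A hypothetical call pattern in a strategy script. The displayMonthlyPnL() name comes from the description above; the import path, version and the entry logic are assumptions.
 //@version=5
 strategy("Strategy with monthly returns table", overlay = true)
 import username/MonthlyReturnsVsMarket/1 as mr        // hypothetical import path and version
 if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))  // placeholder entry logic
     strategy.entry("L", strategy.long)
 mr.displayMonthlyPnL(2)                               // 2 = decimal precision of the table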
Library   "Canvas"
A library implementing a kind of "canvas" using a table where each pixel is represented by a table cell and the pixel color by the background color of each cell.
To use the library, you need to create a color matrix (represented as an array) and a canvas table.
The canvas table is the container of the canvas, and the color matrix determines what color each pixel in the canvas should have.
 max_canvas_size()  Function that returns the maximum size of the canvas (100). The canvas is always square, so the size is equal to rows (as opposed to rows multiplied by columns).
  Returns: The maximum size of the canvas (100).
 get_bg_color(color_matrix)  Get the current background color of the color matrix. This is the default color used when erasing pixels or clearing a canvas.
  Parameters:
     color_matrix : The color matrix.
  Returns: The current background color.
 get_fg_color(color_matrix)  Get the current foreground color of the color matrix. This is the default color used when drawing pixels.
  Parameters:
     color_matrix : The color matrix.
  Returns: The current foreground color.
 set_bg_color(color_matrix, bg_color)  Set the background color of the color matrix. This is the default color used when erasing pixels or clearing a canvas.
  Parameters:
     color_matrix : The color matrix.
     bg_color : The new background color.
 set_fg_color(color_matrix, fg_color)  Set the foreground color of the color matrix. This is the default color used when drawing pixels.
  Parameters:
     color_matrix : The color matrix.
     fg_color : The new foreground color.
 color_matrix_rows(color_matrix, rows)  Function that returns how many rows a color matrix consists of.
  Parameters:
     color_matrix : The color matrix.
     rows : (Optional) The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
  Returns: The number of rows a color matrix consists of.
 pixel_color(color_matrix, x, y, rows)  Get the color of the pixel at the specified coordinates.
  Parameters:
     color_matrix : The color matrix.
     x : The X coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     y : The Y coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     rows : (Optional) The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
  Returns: The color of the pixel at the specified coordinates.
 draw_pixel(color_matrix, x, y, pixel_color, rows)  Draw a pixel at the specified X and Y coordinates. Uses the specified color.
  Parameters:
     color_matrix : The color matrix.
     x : The X coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     y : The Y coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     pixel_color : The color of the pixel.
     rows : (Optional) The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
 draw_pixel(color_matrix, x, y, rows)  Draw a pixel at the specified X and Y coordinates. Uses the current foreground color.
  Parameters:
     color_matrix : The color matrix.
     x : The X coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     y : The Y coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     rows : (Optional) The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
 erase_pixel(color_matrix, x, y, rows)  Erase a pixel at the specified X and Y coordinates, replacing it with the background color.
  Parameters:
     color_matrix : The color matrix.
     x : The X coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     y : The Y coordinate for the pixel. Must be between 0 and "color_matrix_rows() - 1".
     rows : (Optional) The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
 init_color_matrix(rows, bg_color, fg_color)  Create and initialize a color matrix with the specified number of rows. The number of columns will be equal to the number of rows.
  Parameters:
     rows : The number of rows the color matrix should consist of. This can be omitted, but if used, can speed up execution. It can never be greater than "max_canvas_size()".
     bg_color : (Optional) The initial background color. The default is black.
     fg_color : (Optional) The initial foreground color. The default is white.
  Returns: The array representing the color matrix.
 init_canvas(color_matrix, pixel_width, pixel_height, position)  Create and initialize a canvas table.
  Parameters:
     color_matrix : The color matrix.  
     pixel_width : (Optional) The pixel width (in % of the pane width). The default width is 0.35%.
     pixel_height : (Optional) The pixel height (in % of the pane height). The default height is 0.60%.
     position : (Optional) The position for the table representing the canvas. The default is "position.middle_center".
  Returns: The canvas table.
 clear(color_matrix, rows)  Clear a color matrix, replacing all pixels with the current background color.
  Parameters:
     color_matrix : The color matrix.
     rows : The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
 update(canvas, color_matrix, rows)  This updates the canvas with the colors from the color matrix. No changes to the canvas get plotted until this function is called.
  Parameters:
     canvas : The canvas table.
     color_matrix : The color matrix.
     rows : The number of rows of the color matrix. This can be omitted, but if used, can speed up execution.
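Putting the documented calls together, a minimal usage sketch could look like the following. The import path and version, the 20x20 size and the pixel coordinates are assumptions.
 //@version=5
 indicator("Canvas sketch", overlay = true)
 import username/Canvas/1 as cv            // hypothetical import path and version
 var cm = cv.init_color_matrix(20)         // 20x20 color matrix, default black background
 var canvasTable = cv.init_canvas(cm)      // table that will display the pixels
 if barstate.islast
     cv.clear(cm)
     cv.draw_pixel(cm, 10, 10, color.red)  // one red pixel near the middle
     cv.update(canvasTable, cm)            // nothing is shown until update() is called
 plot(na)                                  // the canvas is drawn by the table; this plot only satisfies the compiler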
Library   "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
 solve_xtx_inv(x, y)  Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution, but it is also useful when calculating the covariance matrix, so it is returned here to save computation time:
this step then doesn't have to be recalculated when things like standard errors are computed.
  Parameters:
     x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
     y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
  Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
 solve(x, y)  Solve a linear system of equations using the Ordinary Least Squares method.
  Parameters:
     x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
     y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
  Returns: Returns the estimated OLS solution.
 standard_errors(x, y, beta_hat, xtx_inv)  Calculate the standard errors.
  Parameters:
     x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
     y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
     beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
     xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply that by the X matrix and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
  Returns: The standard errors.
 estimate(x, beta_hat)  Estimate the next step of a linear model.
  Parameters:
     x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
     beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
  Returns: Returns the new estimate of Y based on the linear model.
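The normal-equations step the descriptions above refer to can be written with Pine Script™ v5 matrix built-ins. This is a generic sketch of beta_hat = (X'X)^-1 * X'y with a tiny hard-coded example, not this library's source code.
 //@version=5
 indicator("OLS sketch")
 ols_sketch(matrix<float> x, matrix<float> y) =>
     matrix<float> xt = matrix.transpose(x)
     matrix<float> xtx_inv = matrix.inv(matrix.mult(xt, x))            // (X'X)^-1
     matrix<float> beta_hat = matrix.mult(matrix.mult(xtx_inv, xt), y)
     [beta_hat, xtx_inv]
 // demo: fit y = 1 + 2x on three points; the first column of X is the intercept column
 var matrix<float> X = matrix.new<float>(3, 2, 1.0)
 var matrix<float> Y = matrix.new<float>(3, 1, 0.0)
 if barstate.isfirst
     matrix.set(X, 0, 1, 0.0)
     matrix.set(X, 1, 1, 1.0)
     matrix.set(X, 2, 1, 2.0)
     matrix.set(Y, 0, 0, 1.0)
     matrix.set(Y, 1, 0, 3.0)
     matrix.set(Y, 2, 0, 5.0)
 [beta, xtxInv] = ols_sketch(X, Y)
 plot(matrix.get(beta, 1, 0))   // slope, approximately 2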
Library   "FunctionPolynomialFit"
Performs Polynomial Regression fit to data.
In statistics, polynomial regression is a form of regression analysis in which 
the relationship between the independent variable x and the dependent variable 
y is modelled as an nth degree polynomial in x. 
reference: 
en.wikipedia.org
www.bragitoff.com
 gauss_elimination(A, m, n)  Performs Gauss elimination and returns the upper triangular matrix and the solution of the equations.
  Parameters:
     A : float matrix, data samples.
     m : int, defval=na, number of rows.
     n : int, defval=na, number of columns.
  Returns: float array with coefficients.
 polyfit(X, Y, degree)  Fits a polynomial of a degree to (x, y) points.
  Parameters:
     X : float array, data sample x point.
     Y : float array, data sample y point.
     degree : int, defval=2, degree of the polynomial.
  Returns: float array with coefficients.
note:
p(x) = p[0] * x**deg + ... + p[deg]
 interpolate(coeffs, x)  interpolate the y position at the provided x.
  Parameters:
     coeffs : float array, coefficients of the polynomial.
     x : float, position x to estimate y.
  Returns: float.
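A hypothetical usage sketch of the two documented functions. The polyfit() and interpolate() names and parameters come from the description; the import path, version and the sample data are assumptions.
 //@version=5
 indicator("polyfit sketch")
 import username/FunctionPolynomialFit/1 as pf         // hypothetical import path and version
 var float[] X = array.from(0.0, 1.0, 2.0, 3.0, 4.0)
 var float[] Y = array.from(1.1, 1.9, 4.2, 9.1, 15.8)
 coeffs = pf.polyfit(X, Y, 2)                          // fit a 2nd-degree polynomial
 plot(pf.interpolate(coeffs, 5.0))                     // estimate y at x = 5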
Library   "divergence"
divergence: divergence algorithm with top and bottom kline tolerance
 regular_bull(series, series, simple, simple, simple, simple, simple)  regular_bull: regular bull divergence, lower low src but higher low osc
  Parameters:
     series : float src: the source series
     series : float osc: the oscillator index
     simple : int lbL: look back left
     simple : int lbR: look back right
     simple : int rangeL: min look back range
     simple : int rangeU: max look back range
     simple : int tolerance: the number of tolerant klines
  Returns: array:  
 hidden_bull(series, series, simple, simple, simple, simple, simple)  hidden_bull: hidden bull divergence, higher low src but lower low osc
  Parameters:
     series : float src: the source series
     series : float osc: the oscillator index
     simple : int lbL: look back left
     simple : int lbR: look back right
     simple : int rangeL: min look back range
     simple : int rangeU: max look back range
     simple : int tolerance: the number of tolerant klines
  Returns: array:  
 regular_bear(series, series, simple, simple, simple, simple, simple)  regular_bear: regular bear divergence, higher high src but lower high osc
  Parameters:
     series : float src: the source series
     series : float osc: the oscillator index
     simple : int lbL: look back left
     simple : int lbR: look back right
     simple : int rangeL: min look back range
     simple : int rangeU: max look back range
     simple : int tolerance: the number of tolerant klines
  Returns: array:  
 hidden_bear(series, series, simple, simple, simple, simple, simple)  hidden_bear: hidden bear divergence, lower high src but higher high osc
  Parameters:
     series : float src: the source series
     series : float osc: the oscillator index
     simple : int lbL: look back left
     simple : int lbR: look back right
     simple : int rangeL: min look back range
     simple : int rangeU: max look back range
     simple : int tolerance: the number of tolerant klines
  Returns: array: 
Library   "least_squares_regression"
least_squares_regression: Least squares regression algorithm to find the optimal price interval for a given time period
 basic_lsr(series, series, series)  basic_lsr: Basic least squares regression algorithm
  Parameters:
     series : int  t: time scale value array corresponding to price
     series : float  p: price scale value array corresponding to time
     series : int array_size: the length of regression array
  Returns: reg_slop, reg_intercept, reg_level, reg_stdev
 trend_line_lsr(series, series, series, string, series, series)  top_trend_line_lsr: Trend line fitting based on least square algorithm
  Parameters:
     series : int  t: time scale value array corresponding to price
     series : float  p: price scale value array corresponding to time
     series : int array_size: the length of regression array
     string : reg_type: regression type in 'top' and 'bottom'
     series : int max_iter: maximum fitting iterations
     series : int min_points: the threshold of regression point numbers
  Returns: reg_slop, reg_intercept, reg_level, reg_stdev, reg_point_num
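For reference, the basic least-squares slope and intercept over paired arrays can be sketched as below. This is a generic illustration of the math (with a tiny hard-coded check), not this library's code, and it assumes equal-length float arrays.
 //@version=5
 indicator("Least squares sketch")
 basicLsrSketch(array<float> t, array<float> p) =>
     int   n   = array.size(t)
     float sx  = array.sum(t)
     float sy  = array.sum(p)
     float sxy = 0.0
     float sxx = 0.0
     for i = 0 to n - 1
         float x = array.get(t, i)
         float y = array.get(p, i)
         sxy += x * y
         sxx += x * x
     float slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
     float intercept = (sy - slope * sx) / n
     [slope, intercept]
 [s, b] = basicLsrSketch(array.from(0.0, 1.0, 2.0, 3.0), array.from(1.0, 3.0, 5.0, 7.0))
 plot(s)   // slope = 2 (and b = 1) for this sample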
Library   "simple_squares_regression"
simple_squares_regression: simple squares regression algorithm to find the optimal price interval for a given time period
 basic_ssr(series, series, series)  basic_ssr: Basic simple squares regression algorithm
  Parameters:
     series : float src: the regression source such as close
     series : int region_forward: number of candle lines at the right end of the regression region from the current candle line
     series : int region_len: the length of regression region
  Returns: left_loc, right_loc, reg_val, reg_std, reg_max_offset
 search_ssr(series, series, series, series)  search_ssr: simple squares regression region search algorithm
  Parameters:
     series : float src: the regression source such as close
     series : int max_forward: max number of candle lines at the right end of the regression region from the current candle line
     series : int region_lower: the lower length of regression region
     series : int region_upper: the upper length of regression region
  Returns: left_loc, right_loc, reg_val, reg_level, reg_std_err, reg_max_offset
Library   "on_balance_volume"
on_balance_volume: custom on balance volume
 obv_diff(string, simple)  obv_diff: custom on balance volume diff version
  Parameters:
     string : type: the moving average type of on balance volume
     simple : int len: the moving average length of on balance volume
  Returns: obv_diff: custom on balance volume diff value
 obv_diff_norm(string, simple)  obv_diff_norm: custom normalized on balance volume diff version
  Parameters:
     string : type: the moving average type of on balance volume
     simple : int len: the moving average length of on balance volume
  Returns: obv_diff: custom normalized on balance volume diff value
Library   "moving_average"
moving_average: moving average variants
 variant(string, series, simple)  variant: moving average variants
  Parameters:
     string : type: type in  
     series : float src: the source series of moving average
     simple : int len: the length of moving average
  Returns: float: the moving average variant value
ConditionalAverages
█   OVERVIEW
This library is a Pine Script™ programmer’s tool containing functions that average values selectively.
█   CONCEPTS 
Averaging can be useful to smooth out unstable readings in the data set, provide a benchmark to see the underlying trend of the data, or to provide a general expectancy of values in establishing a central tendency. Conventional averaging techniques tend to apply indiscriminately to all values in a fixed window, but it can sometimes be useful to average values only when a specific condition is met. As conditional averaging works on specific elements of a dataset, it can help us derive more context-specific conclusions. This library offers a collection of averaging methods that not only accomplish these tasks, but also exploit the efficiencies of the Pine Script™ runtime by foregoing unnecessary and resource-intensive  for  loops.
█   NOTES 
 To Loop or Not to Loop 
Though  for  and  while  loops are essential programming tools, they are often unnecessary in Pine Script™. This is because the Pine Script™ runtime already runs your scripts in a loop where it executes your code on each bar of the dataset. Pine Script™ programmers who understand how their code executes on charts can use this to their advantage by designing loop-less code that will run orders of magnitude faster than functionally identical code using loops. Most of this library's functions illustrate how you can achieve loop-less code to process past values. See the  User Manual page on loops  for more information. If you are looking for ways to measure execution time for your scripts, have a look at our  LibraryStopwatch library .
Our `avgForTimeWhen()` and `totalForTimeWhen()` are exceptions in the library, as they use a  while  structure. Only a few iterations of the loop are executed on each bar, however, as its only job is to remove the few elements in the array that are outside the moving window defined by a time boundary.
 Cumulating and Summing Conditionally 
The  ta.cum()  or  math.sum()  built-in functions can be used with ternaries that select only certain values. In our `avgWhen(src, cond)` function, for example, we use this technique to cumulate only the occurrences of `src` when `cond` is true:
 float cumTotal = ta.cum(cond ? src : 0)
We then use:
 float cumCount = ta.cum(cond ? 1 : 0)
to calculate the number of occurrences where `cond` is true, which corresponds to the quantity of values cumulated in `cumTotal`.
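Dividing the two cumulative series then yields the conditional average. A minimal sketch of the idea (the library's actual function may differ):
 //@version=5
 indicator("avgWhen sketch")
 avgWhen(float src, bool cond) =>
     float cumTotal = ta.cum(cond ? src : 0)
     float cumCount = ta.cum(cond ? 1 : 0)
     cumTotal / cumCount
 plot(avgWhen(close, close > open))   // average close of up bars only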
 Building Custom Series With Arrays 
The advent of arrays in Pine has enabled us to build our custom data series. Many of this library's functions use arrays for this purpose, saving newer values that come in when a condition is met, and discarding the older ones, implementing a  queue .
 `avgForTimeWhen()` and `totalForTimeWhen()` 
These two functions warrant a few explanations. They operate on a number of values included in a moving window defined by a timeframe expressed in milliseconds. We use a 1D timeframe in our example code. The number of bars included in the moving window is unknown to the programmer, who only specifies the period of time defining the moving window. You can thus use `avgForTimeWhen()` to calculate a rolling moving average for the last 24 hours, for example, that will work whether the chart is using a 1min or 1H timeframe. A 24-hour moving window will typically contain many more values on a 1min chart than on a 1H chart, but their calculated average will be very close.
Problems will arise on non-24x7 markets when large time gaps occur between chart bars, as will be the case across holidays or trading sessions. For example, if you were using a 24H timeframe and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The `minBars` parameter mitigates this by guaranteeing that a minimum number of bars are always included in the calculation, even if including those bars requires reaching outside the prescribed timeframe. We use a minimum value of 10 bars in the example code.
 Using  var  in Constant Declarations 
In the past, we have been using  var  when initializing so-called constants in our scripts, which as per the  Style Guide 's recommendations, we identify using UPPER_SNAKE_CASE. It turns out that  var  variables incur slightly higher maintenance overhead in the Pine Script™ runtime, when compared to variables initialized on each bar. We thus no longer use  var  to declare our "int/float/bool" constants, but still use it when an initialization on each bar would require too much time, such as when initializing a string or with a heavy function call.
 Look first. Then leap.  
█   FUNCTIONS 
 avgWhen(src, cond)  
  Gathers values of the source when a condition is true and averages them over the total number of occurrences of the condition.
  Parameters:
     src : (series int/float) The source of the values to be averaged. 
     cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
  Returns: (float) A cumulative average of values when a condition is met.
 avgWhenLast(src, cond, cnt)  
  Gathers values of the source when a condition is true and averages them over a defined number of occurrences of the condition.
  Parameters:
     src : (series int/float) The source of the values to be averaged. 
     cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
     cnt : (simple int) The quantity of last occurrences of the condition for which to average values.
  Returns: (float) The average of `src` for the last `x` occurrences where `cond` is true.
 avgWhenInLast(src, cond, cnt)  
  Gathers values of the source when a condition is true and averages them over the total number of occurrences during a defined number of bars back.
  Parameters:
     src : (series int/float) The source of the values to be averaged. 
     cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
     cnt : (simple int) The quantity of bars back to evaluate.
  Returns: (float) The average of `src` in last `cnt` bars, but only when `cond` is true.
 avgSince(src, cond)  
  Averages values of the source since a condition was true.
  Parameters:
     src : (series int/float) The source of the values to be averaged. 
     cond : (series bool) The condition determining when the average is reset.
  Returns: (float) The average of `src` since `cond` was true.
 avgForTimeWhen(src, ms, cond, minBars)  
  Averages values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
  Parameters:
     src : (series int/float) The source of the values to be averaged. 
     ms : (simple int) The time duration in milliseconds defining the size of the moving window.
     cond : (series bool) The condition determining which values are included. Optional.
     minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
  Returns: (float) The average of `src` when `cond` is true in the moving window.
 totalForTimeWhen(src, ms, cond, minBars)  
  Sums values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
  Parameters:
     src : (series int/float) The source of the values to be summed. 
     ms : (simple int) The time duration in milliseconds defining the size of the moving window.
     cond : (series bool) The condition determining which values are included. Optional.
     minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
  Returns: (float) The sum of `src` when `cond` is true in the moving window.
Library   "MathProbabilityDistribution"
Probability Distribution Functions.
 name(idx)  Indexed names helper function.
  Parameters:
     idx : int, position in the range (0, 6).
  Returns: string, distribution name.
usage:
.name(1)
Notes:
(0) => 'StdNormal'     
(1) => 'Normal'        
(2) => 'Skew Normal'   
(3) => 'Student T'     
(4) => 'Skew Student T'
(5) => 'GED'           
(6) => 'Skew GED'
 zscore(position, mean, deviation)  Z-score helper function for x calculation.
  Parameters:
     position : float, position.
     mean : float, mean.
     deviation : float, standard deviation.
  Returns: float, z-score.
usage:
.zscore(1.5, 2.0, 1.0)
 std_normal(position)  Standard Normal Distribution.
  Parameters:
     position : float, position.
  Returns: float, probability density.
usage:
.std_normal(0.6)
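For reference, the standard normal density behind a function like this is straightforward to write in Pine; a sketch of the textbook formula, assuming that is what the library implements:
 //@version=5
 indicator("Std normal sketch")
 // f(z) = exp(-z^2 / 2) / sqrt(2 * pi)
 std_normal_pdf(float z) =>
     math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
 plot(std_normal_pdf(0.6))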
 normal(position, mean, scale)  Normal Distribution.
  Parameters:
     position : float, position in the distribution.
     mean : float, mean of the distribution, default=0.0 for standard distribution.
     scale : float, scale of the distribution, default=1.0 for standard distribution.
  Returns: float, probability density.
usage:
.normal(0.6)
 skew_normal(position, skew, mean, scale)  Skew Normal Distribution.
  Parameters:
     position : float, position in the distribution.
     skew : float, skewness of the distribution.
     mean : float, mean of the distribution, default=0.0 for standard distribution.
     scale : float, scale of the distribution, default=1.0 for standard distribution.
  Returns: float, probability density.
usage:
.skew_normal(0.8, -2.0)
 ged(position, shape, mean, scale)  Generalized Error Distribution.
  Parameters:
     position : float, position.
     shape : float, shape.
     mean : float, mean, default=0.0 for standard distribution.
     scale : float, scale, default=1.0 for standard distribution.
  Returns: float, probability.
usage:
.ged(0.8, -2.0)
 skew_ged(position, shape, skew, mean, scale)  Skew Generalized Error Distribution.
  Parameters:
     position : float, position.
     shape : float, shape.
     skew : float, skew.
     mean : float, mean, default=0.0 for standard distribution.
     scale : float, scale, default=1.0 for standard distribution.
  Returns: float, probability.
usage:
.skew_ged(0.8, 2.0, 1.0)
 student_t(position, shape, mean, scale)  Student-T Distribution.
  Parameters:
     position : float, position.
     shape : float, shape.
     mean : float, mean, default=0.0 for standard distribution.
     scale : float, scale, default=1.0 for standard distribution.
  Returns: float, probability.
usage:
.student_t(0.8, 2.0, 1.0)
 skew_student_t(position, shape, skew, mean, scale)  Skew Student-T  Distribution.
  Parameters:
     position : float, position.
     shape : float, shape.
     skew : float, skew.
     mean : float, mean, default=0.0 for standard distribution.
     scale : float, scale, default=1.0 for standard distribution.
  Returns: float, probability.
usage:
.skew_student_t(0.8, 2.0, 1.0)
 select(distribution, position, mean, scale, shape, skew, log)  Conditional Distribution.
  Parameters:
     distribution : string, distribution name.
     position : float, position.
     mean : float, mean, default=0.0 for standard distribution.
     scale : float, scale, default=1.0 for standard distribution.
     shape : float, shape.
     skew : float, skew.
     log : bool, if true apply log() to the result.
  Returns: float, probability.
usage:
.select('StdNormal', __CYCLE4F__, log=true)
Library  "MovingAverages"
Contains utilities for generating moving average values including getting a moving average by name and a function for generating a Volume-Adjusted WMA.
 sma(_D, _len)  Simple Moving Average
  Parameters:
     _D : The series to measure from.
     _len : The number of bars to measure with.
 ema(_D, _len)  Exponential Moving Average
  Parameters:
     _D : The series to measure from.
     _len : The number of bars to measure with.
 rma(_D, _len)  RSI Moving Average
  Parameters:
     _D : The series to measure from.
     _len : The number of bars to measure with.
 wma(_D, _len)  Weighted Moving Average
  Parameters:
     _D : The series to measure from.
     _len : The number of bars to measure with.
 vwma(_D, _len)  Volume-Weighted Moving Average
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
 alma(_D, _len)  Arnaud Legoux Moving Average
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
 cma(_D, _len, C, compound)  Coefficient Moving Average (CMA) is a variation of a moving average that can simulate SMA or WMA with the advantage of previous data.
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
     C : The coefficient to use when averaging. 0 behaves like SMA, 1 behaves like WMA.
     compound : When true (default is false) will use a compounding method for weighting the average.
 dema(_D, _len)  Double Exponential Moving Average
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
 zlsma(_D, _len)  Zero-Lag Least Squares Moving Average
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
 zlema(_D, _len)  Zero-Lag Exponential Moving Average
  Parameters:
     _D : The series to measure from.  Default is 'close'.
     _len : The number of bars to measure with.
 get(type, len, src)  Generates a moving average based upon a 'type'.
  Parameters:
     type : The type of moving average to generate.  Values allowed are: SMA, EMA, WMA, VWMA and VAWMA.
     len : The number of bars to measure with.
     src : The series to measure from.  Default is 'close'.
  Returns: The moving average series requested.
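A hypothetical call of the get() dispatcher documented above. The type strings come from the description; the import path, version and lengths are assumptions.
 //@version=5
 indicator("MovingAverages sketch", overlay = true)
 import username/MovingAverages/1 as ma    // hypothetical import path and version
 plot(ma.get("EMA", 21, close))
 plot(ma.get("VAWMA", 21, close))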
Library   "eStrategy"
Library contains methods which can help build custom strategies for continuous investment plans and also compare them with systematic buy and hold.
 sip(startYear, initialDeposit, depositFrequency, recurringDeposit, buyPrice)  Depicts systematic buy and hold over a period of time
  Parameters:
     startYear : Year on which SIP is started
     initialDeposit : Initial one time investment at the start
     depositFrequency : Frequency of recurring deposit - can be monthly or weekly
     recurringDeposit : Recurring deposit amount
     buyPrice : Indicative buy price. Use high to be conservative. low, close, open, hl2, hlc3, ohlc4, hlcc4 are other options.
  Returns: totalInvestment - initial + recurring deposits
totalQty - Quantity of units held for given instrument
totalEquity - Present equity
 customStrategy(startYear, initialDeposit, depositFrequency, recurringDeposit, buyPrice, sellPrice, initialInvestmentPercent, recurringInvestmentPercent, signal, tradePercent)  Allows users to define custom strategy and enhance systematic buy and hold by adding take profit and reloads
  Parameters:
     startYear : Year on which SIP is started
     initialDeposit : Initial one time investment at the start
     depositFrequency : Frequency of recurring deposit - can be monthly or weekly
     recurringDeposit : Recurring deposit amount
     buyPrice : Indicative buy price. Use high to be conservative. low, close, open, hl2, hlc3, ohlc4, hlcc4 are other options.
     sellPrice : Indicative sell price. Use low to be conservative. high, close, open, hl2, hlc3, ohlc4, hlcc4 are other options.
     initialInvestmentPercent : percent of the initial deposit to invest. Keep the rest as cash
     recurringInvestmentPercent : percent of each recurring deposit to invest. Keep the rest as cash
     signal : can be 1, -1 or 0. 1 means buy/reload. -1 means take profit and 0 means neither. 
     tradePercent : percent of the amount to trade when signal is not 0. If taking profit, it will sell that percent of the existing position. If reloading, it will buy with that percent of the cash reserve
  Returns: totalInvestment - initial + recurring deposits
totalQty - Quantity of units held for given instrument
totalCash = Amount of cash held
totalEquity - Overall equity = totalQty*close + totalCash
Library   "JohnEhlersFourierTransform"
Fourier Transform for Traders by John Ehlers, slightly modified to allow inspecting frequency ranges other than the 8-50 spectrum.
reference:
www.mesasoftware.com
 high_pass_filter(source)  Detrended version of the data by High Pass Filtering with a 40 Period cutoff
  Parameters:
     source : float, data source.
  Returns: float.
 transformed_dft(source, start_frequency, end_frequency)  DFT by John Ehlers.
  Parameters:
     source : float, data source.
     start_frequency : int, lower bound of the frequency window; must be a positive number >= 0, and the window must be less than or equal to 30.
     end_frequency : int, upper bound of the frequency window; must be a positive number >= 0, and the window must be less than or equal to 30.
  Returns: tuple with float, float array.
 db_to_rgb(db, transparency)  converts the frequency decibels to rgb.
  Parameters:
     db : float, decibels value.
     transparency : float, transparency value.
  Returns: color.
Library   "FunctionCosineSimilarity"
Cosine Similarity method.
 function(sample_a, sample_b)  Measure the similarity of 2 vectors.
  Parameters:
     sample_a : float array, values.
     sample_b : float array, values.
  Returns: float.
 diss(cosim)  Dissimilarity helper function.
  Parameters:
     cosim : float, cosine similarity value (0 to 1)
  Returns: float
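The underlying measure is the classic dot-product-over-norms formula. A generic sketch over two float arrays (not the library's source):
 //@version=5
 indicator("Cosine similarity sketch")
 cosineSimilarity(array<float> a, array<float> b) =>
     float dot = 0.0
     float normA = 0.0
     float normB = 0.0
     for i = 0 to array.size(a) - 1
         float va = array.get(a, i)
         float vb = array.get(b, i)
         dot   += va * vb
         normA += va * va
         normB += vb * vb
     dot / (math.sqrt(normA) * math.sqrt(normB))
 plot(cosineSimilarity(array.from(1.0, 2.0, 3.0), array.from(2.0, 4.0, 6.0)))   // parallel vectors, result = 1.0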
Library   "historicalrange"
Library provides a method to calculate the historical percentile range of a series.
 hpercentrank(source)  calculates historical percentrank of the source
  Parameters:
     source : Source for which historical percentrank needs to be calculated. Source should range between 0 and 100. If using a source which can go beyond 0-100, use short-term percentrank to baseline it.
  Returns: pArray - percentrank array which contains how many instances of source occurred at different levels.
upperPercentile - percentile based on higher value
lowerPercentile - percentile based on lower value
median - median value of the source
max - max value of the source
 distancefromath(source)  returns stats on historical distance from ath in terms of percentage
  Parameters:
     source : for which stats are calculated
  Returns: percentile and related historical stats regarding distance from ath
 distancefromma(maType, length, source)  returns stats on historical distance from moving average in terms of percentage
  Parameters:
     maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
     length : Moving Average Length
     source : for which stats are calculated
  Returns: percentile and related historical stats regarding distance from the moving average
 bpercentb(source, maType, length, multiplier, sticky)  returns percentrank and stats on historical bpercentb levels
  Parameters:
     source : Moving Average Source
     maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
     length : Moving Average Length
     multiplier : Standard Deviation multiplier
     sticky : - sticky boundaries which will only change when value is outside boundary.
  Returns: percentile and related historical stats regarding Bollinger Percent B
 kpercentk(source, maType, length, multiplier, useTrueRange, sticky)  returns percentrank and stats on historical kpercentk levels
  Parameters:
     source : Moving Average Source
     maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
     length : Moving Average Length
     multiplier : Standard Deviation multiplier
     useTrueRange : - if set to false, uses high-low.
     sticky : - sticky boundaries which will only change when value is outside boundary.
  Returns: percentile and related historical stats regarding Keltner Percent K
 dpercentd(useAlternateSource, alternateSource, length, sticky)  returns percentrank and stats on historical dpercentd levels
  Parameters:
     useAlternateSource : - Custom source is used only if useAlternateSource is set to true
     alternateSource : - Custom source
     length : - donchian channel length
     sticky : - sticky boundaries which will only change when value is outside boundary.
  Returns: percentile and related historical stats regarding Donchian Percent D
 oscillator(type, length, shortLength, longLength, source, highSource, lowSource, method, highlowLength, sticky)  oscillator - returns the chosen oscillator with a custom overbought/oversold range
  Parameters:
     type : - oscillator type. Valid values : cci, cmo, cog, mfi, roc, rsi, stoch, tsi, wpr
     length : - Oscillator length - not used for TSI
     shortLength : - shortLength only used for TSI
     longLength : - longLength only used for TSI
     source : - custom source if required
     highSource : - custom high source for stochastic oscillator
     lowSource : - custom low source for stochastic oscillator
     method : - Valid values for method are : sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
     highlowLength : - length on which highlow of the oscillator is calculated
     sticky : - overbought, oversold levels won't change unless crossed
  Returns: percentile and related historical stats regarding oscillator
Library   "WIPNNetwork"
 this is a work in progress (WIP) and prone to have some errors, so use at your own risk... 
let me know if you find any issues..
Method for a generalized Neural Network.
 network(x)  Generalized Neural Network Method.
  Parameters:
     x : TODO: add parameter x description here
  Returns: TODO: add what function returns
Library   "FunctionPatternDecomposition"
Methods for decomposing price into common grid/matrix patterns.
 series_to_array(source, length)  Helper for converting series to array.
  Parameters:
     source : float, data series.
     length : int, size.
  Returns: float array.
 smooth_data_2d(data, rate)  Smooth data sample into 2d points.
  Parameters:
     data : float array, source data.
     rate : float, default=0.25, the rate of smoothness to apply.
  Returns: tuple with 2 float arrays.
 thin_points(data_x, data_y, rate)  Thin the number of points.
  Parameters:
     data_x : float array, points x value.
     data_y : float array, points y value.
     rate : float, default=2.0, minimum threshold rate of sample stdev to accept points.
  Returns: tuple with 2 float arrays.
 extract_point_direction(data_x, data_y)  Extract the direction each point faces.
  Parameters:
     data_x : float array, points x value.
     data_y : float array, points y value.
  Returns: float array.
 find_corners(data_x, data_y, rate)  ...
  Parameters:
     data_x : float array, points x value.
     data_y : float array, points y value.
     rate : float, minimum threshold rate of data y stdev.
  Returns: tuple with 2 float arrays.
 grid_coordinates(data_x, data_y, m_size)  transforms points data to a constrained sized matrix format.
  Parameters:
     data_x : float array, points x value.
     data_y : float array, points y value.
     m_size : int, default=10, size of the matrix.
  Returns: flat 2d pseudo matrix.
Library   "statistics"
General statistics library.
 erf(x)  The "error function" encountered in integrating the normal
distribution (which is a normalized form of the Gaussian function).
  Parameters:
     x : The input series.
  Returns: The Error Function evaluated for each element of x.
 erfc(x)  
  Parameters:
     x : The input series
  Returns: The Complementary Error Function evaluated for each element of x.
 sumOfReciprocals(src, len)  Calculates the sum of the reciprocals of the series.
For each element 'elem' in the series:
sum += 1/elem
Should the element be 0, the reciprocal value of 0 is used instead
of NA.
  Parameters:
     src : The input series.
     len : The length for the sum.
  Returns: The sum of the reciprocals of 'src' for 'len' bars back.
 mean(src, len)  The mean of the series.
(wrapper around ta.sma).
  Parameters:
     src : The input series.
     len : The length for the mean.
  Returns: The mean of 'src' for 'len' bars back.
 average(src, len)  The mean of the series.
(wrapper around ta.sma).
  Parameters:
     src : The input series.
     len : The length for the average.
  Returns: The average of 'src' for 'len' bars back.
 geometricMean(src, len)  The Geometric Mean of the series.
The geometric mean is most important when using data representing
percentages, ratios, or rates of change. It cannot be used for
negative numbers
Since the pure mathematical implementation generates a very large
intermediate result, we performed the calculation in log space.
  Parameters:
     src : The input series.
     len : The length for the geometricMean.
  Returns: The geometric mean of 'src' for 'len' bars back.
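As the note above explains, the calculation is done in log space. A minimal sketch, assuming strictly positive source values (generic code, not the library's source):
 //@version=5
 indicator("Geometric mean sketch")
 geometricMean(float src, int len) =>
     // exp of the average of the logs avoids the huge intermediate product
     math.exp(math.sum(math.log(src), len) / len)
 plot(geometricMean(close, 20))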
 harmonicMean(src, len)  The Harmonic Mean of the series.
The harmonic mean is most applicable to time changes and, along
with the geometric mean, has been used in economics for price
analysis. It is more difficult to calculate; therefore, it is less
popular than either of the other averages.
0 values are ignored in the calculation.
  Parameters:
     src : The input series.
     len : The length for the harmonicMean.
  Returns: The harmonic mean of 'src' for 'len' bars back.
 median(src, len)  The median of the series.
(a wrapper around ta.median)
  Parameters:
     src : The input series.
     len : The length for the median.
  Returns: The median of 'src' for 'len' bars back.
 variance(src, len, biased)  The variance of the series.
  Parameters:
     src : The input series.
     len : The length for the variance.
     biased : Whether to use the biased calculation (for a population), or the
unbiased calculation (for a sample set).
  Returns: The variance of 'src' for 'len' bars back.
 stdev(src, len, biased)  The standard deviation of the series.
  Parameters:
     src : The input series.
     len : The length for the stdev.
     biased : Whether to use the biased calculation (for a population), or the
unbiased calculation (for a sample set).
  Returns: The standard deviation of 'src' for 'len' bars back.
 skewness(src, len)  The skew of the series.
Skewness measures the amount of distortion from a symmetric
distribution, making the curve appear to be short on the left
(lower prices) and extended to the right (higher prices). The
extended side, either left or right is called the tail, and a
longer tail to the right is called positive skewness. Negative
skewness has the tail extending towards the left.
  Parameters:
     src : The input series.
     len : The length for the skewness.
  Returns: The skewness of 'src' for 'len' bars back.
 kurtosis(src, len)  The kurtosis of the series.
Kurtosis describes the peakedness or flatness of a distribution.
This can be used as an unbiased assessment of whether prices are
trending or moving sideways. Trending prices will cover a wider
range and thus a flatter distribution (kurtosis < 3; negative
kurtosis). If prices are range-bound, there will be a clustering
around the mean and we have positive kurtosis (kurtosis > 3)
  Parameters:
     src : The input series.
     len : The length for the kurtosis.
  Returns: The kurtosis of 'src' for 'len' bars back.
 excessKurtosis(src, len)  The normalized kurtosis of the series.
kurtosis > 0 --> positive kurtosis --> range-bound
kurtosis < 0 --> negative kurtosis --> trending
  Parameters:
     src : The input series.
     len : The length for the excessKurtosis.
  Returns: The excessKurtosis of 'src' for 'len' bars back.
 normDist(src, len, value)  Calculates the probability mass for the value according to the
src and length. It calculates the probability for value to be 
present in the normal distribution calculated for src and length.
  Parameters:
     src : The input series.
     len : The length for the normDist.
     value : The series of values to calculate the normal distance for
  Returns: The normal distance of 'value' to 'src' for 'len' bars back.
 normDistCumulative(src, len, value)  Calculates the cumulative probability mass for the value according
to the src and length. It calculates the cumulative probability for
value to  be present in the normal distribution calculated for src
and length.
  Parameters:
     src : The input series.
     len : The length for the normDistCumulative.
     value : The series of values to calculate the cumulative normal distance
for
  Returns: The cumulative normal distance of 'value' to 'src' for 'len' bars
back.
 zScore(src, len, value)  Returns the z-score of the objective with respect to the series src.
It returns the number of stdev's the objective is away from the
mean(src, len)
  Parameters:
     src : The input series.
     len : The length for the zScore.
     value : The series of values to calculate the cumulative normal distance
for
  Returns: The z-score of the objective with respect to src and len.
 er(src, len)  Calculates the efficiency ratio of the series.
It measures the noise of the series. The lower the number, the
higher the noise.
  Parameters:
     src : The input series.
     len : The length for the efficiency ratio.
  Returns: The efficiency ratio of 'src' for 'len' bars back.
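The efficiency ratio described here is commonly computed as net change over the sum of absolute bar-to-bar changes (Kaufman's fractal efficiency). A sketch of that standard formula, assuming this is what er() implements:
 //@version=5
 indicator("Efficiency ratio sketch")
 efficiencyRatioSketch(float src, int len) =>
     math.abs(ta.change(src, len)) / math.sum(math.abs(ta.change(src)), len)
 plot(efficiencyRatioSketch(close, 10))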
 efficiencyRatio(src, len)  Calculates the efficiency ratio of the series.
It measures the noise of the series. The lower the number, the
higher the noise.
  Parameters:
     src : The input series.
     len : The length for the efficiency ratio.
  Returns: The efficiency ratio of 'src' for 'len' bars back.
 fractalEfficiency(src, len)  Calculates the efficiency ratio of the series.
It measures the noise of the series. The lower the number, the
higher the noise.
  Parameters:
     src : The input series.
     len : The length for the efficiency ratio.
  Returns: The efficiency ratio of 'src' for 'len' bars back.
 mse(src, len)  Calculates the Mean Squared Error of the series.
  Parameters:
     src : The input series.
     len : The length for the mean squared error.
  Returns: The mean squared error of 'src' for 'len' bars back.
 meanSquaredError(src, len)  Calculates the Mean Squared Error of the series.
  Parameters:
     src : The input series.
     len : The length for the mean squared error.
  Returns: The mean squared error of 'src' for 'len' bars back.
 rmse(src, len)  Calculates the Root Mean Squared Error of the series.
  Parameters:
     src : The input series.
     len : The length for the root mean squared error.
  Returns: The root mean squared error of 'src' for 'len' bars back.
 rootMeanSquaredError(src, len)  Calculates the Root Mean Squared Error of the series.
  Parameters:
     src : The input series.
     len : The length for the root mean squared error.
  Returns: The root mean squared error of 'src' for 'len' bars back.
 mae(src, len)  Calculates the Mean Absolute Error of the series.
  Parameters:
     src : The input series.
     len : The length for the mean absolute error.
  Returns: The mean absolute error of 'src' for 'len' bars back.
 meanAbsoluteError(src, len)  Calculates the Mean Absolute Error of the series.
  Parameters:
     src : The input series.
     len : The length for the mean absolute error.
  Returns: The mean absolute error of 'src' for 'len' bars back.