Variety Distribution Probability Cone [Loxx]
Variety Distribution Probability Cone forecasts price within a range of confidence using Geometric Brownian Motion (GBM) calculated with a selected probability distribution, volatility estimator, and drift. Below is a detailed explanation of the inner workings of the indicator and the math involved. While this indicator would normally be used by options traders, it can also be used by regular directional traders who wish to observe a forecast of the confidence interval of possible prices over time.
What is a Random Walk
A random walk is a path consisting of a series of random steps. The starting point is zero, and each subsequent movement is one step to the left or to the right with equal probability. In a random walk process there is no observable trend or pattern; the movements are completely random. That is why the up-and-down movement of a stock's price can be modeled by a random walk process.
Stock Prices and Geometric Brownian Motion
Brownian motion, as first conceived by the botanist Robert Brown (1827), is a mathematical model used to describe the random movements of small particles in a fluid or gas. Similar random movements are observed in the stock markets, where prices move up and down randomly; hence, Brownian motion is considered a mathematical model for stock prices.
Under GBM, the log price at time t is normally distributed around ln(S0) + (mu - 1/2*sigma^2)t with standard deviation sigma*t^0.5, so the (1 - alpha) confidence interval for the future price is:
P(exp(lnS0 + (mu - 1/2*sigma^2)t - z(alpha/2)*sigma*t^0.5) <= St <= exp(lnS0 + (mu - 1/2*sigma^2)t + z(alpha/2)*sigma*t^0.5)) = 1 - alpha
With alpha = 0.05, this gives the 95% cone.
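As a rough outside-of-TradingView illustration, here is a minimal Python sketch of those cone bounds under a normal distribution (numpy/scipy; S0, mu, and sigma are hypothetical inputs):

import numpy as np
from scipy.stats import norm

S0, mu, sigma = 100.0, 0.05, 0.20          # spot, annual drift, annual volatility (assumed)
alpha = 0.05                               # 95% confidence
t = np.linspace(1 / 252, 126 / 252, 126)   # forecast horizon in years, daily steps

z = norm.ppf(1 - alpha / 2)                # two-sided critical value, ~1.96
center = np.log(S0) + (mu - 0.5 * sigma**2) * t
lower = np.exp(center - z * sigma * np.sqrt(t))
upper = np.exp(center + z * sigma * np.sqrt(t))
print(lower[-1], upper[-1])                # cone bounds at the six-month horizon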
Probability Distributions
Typically the normal distribution is used, but for our purposes here we extend this to the Student's t, Cauchy, Gaussian KDE, and Laplace distributions.
Student's t-Distribution
The probability density function of the Student’s t distribution is given by
g(x) = Gamma((v+1)/2) / (Gamma(v/2) * sqrt(v*pi)) * (1 + x^2/v)^(-(v+1)/2)
with v degrees of freedom, v > 0, denoted by X ~ t(v). The mean is 0 (for v > 1) and the variance is v/(v-2) (for v > 2). It is known that as v tends to infinity, the Student's t-distribution tends to the standard normal probability density function, which has a variance of one. Blattberg and Gonedes were the first to propose that stock returns could be modeled by this distribution (Blattberg and Gonedes, 1974). Platen and Rendek later reaffirmed these findings (Platen and Rendek, 2007). Finally, Cassidy, Hamp, and Ouyed used these findings to derive the Gosset formula, the Student's t version of the Black-Scholes model (Cassidy et al., 2010). They found that v = 2.65 provides the best fit to the past 100 years of returns, and noted that as markets become more turbulent, the degrees of freedom should be adjusted to a smaller value (Cassidy et al., 2010).
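To see why this matters for the cone, compare the two-sided critical values of the normal distribution and a Student's t with the v = 2.65 suggested above (a Python/scipy sketch):

from scipy.stats import norm, t

alpha = 0.05
z_normal = norm.ppf(1 - alpha / 2)         # ~1.96
z_gosset = t.ppf(1 - alpha / 2, df=2.65)   # noticeably larger; fatter tails widen the cone
print(z_normal, z_gosset)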
Cauchy Distribution
The probability density function of the Cauchy distribution is given by
f(x) = 1 / (pi*theta*(1 + ((x-n)/theta)^2))
where n is the location parameter and theta is the scale parameter, for -infinity < x < infinity, denoted by X ~ CAU(theta, n). This model is similar to the normal distribution in that it is symmetric about its center, but the tails are fatter, meaning that the probability of an extreme event lies far out in the distribution's tail. Using a crude example: if the normal distribution gave an extreme event a probability of 0.05%, with a "best case" recurrence of once every 300 years, the Cauchy distribution might put the probability of the same event at around 5%, reducing the "best case" recurrence to only 63 years and giving extreme events a much greater likelihood of occurring. The mean, variance, and higher-order moments are not defined (they are infinite); this implies that n and theta cannot be related to a mean and standard deviation. The Cauchy distribution is the special case of the Student's t distribution with v = 1, i.e. t(1) = CAU(1, 0). In 1963, Benoit Mandelbrot was the first to suggest that stock returns follow a stable distribution, in particular the Cauchy distribution (Mandelbrot, 1963). His work was validated by Eugene Fama in 1965 (Fama, 1965). Recent research by Nassim Taleb came to the same conclusion as Mandelbrot, saying that stock returns follow a Cauchy distribution, as reported in his New York Times best-seller "The Black Swan" (Taleb, 2010).
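A quick scipy comparison of tail probabilities in the spirit of the crude example above (the five-unit move is an arbitrary choice):

from scipy.stats import norm, cauchy

x = 5.0                  # an "extreme" move, five scale units from the center
p_normal = norm.sf(x)    # ~2.9e-7 under the normal distribution
p_cauchy = cauchy.sf(x)  # ~0.063 under the Cauchy: orders of magnitude more likely
print(p_normal, p_cauchy)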
Laplace Distribution
In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together along the abscissa, although the term is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution.
The probability density function of the Laplace distribution is given by
f(x) = 1/(2b) * exp(-|x-µ|/b)
Here, µ is a location parameter and b > 0, which is sometimes referred to as the "diversity", is a scale parameter. If µ = 0 and b=1, the positive half-line is exactly an exponential distribution scaled by 1/2.
The probability density function of the Laplace distribution is also reminiscent of the normal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the mean µ, the Laplace density is expressed in terms of the absolute difference from the mean. Consequently, the Laplace distribution has fatter tails than the normal distribution.
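As an illustration of those fatter tails, compare 95% quantiles at matched variance (a scipy sketch; a Laplace with b = 1/sqrt(2) has variance 2b^2 = 1):

from scipy.stats import norm, laplace

alpha = 0.05
q_normal = norm.ppf(1 - alpha / 2)                      # ~1.96
q_laplace = laplace.ppf(1 - alpha / 2, scale=2**-0.5)   # ~2.12 at unit variance
print(q_normal, q_laplace)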
Gaussian Kernel Density Estimation
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE is a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy.
Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density f at any given point x. We are interested in estimating the shape of this function f. Its kernel density estimator is:
f(x) = 1/(n*h) * sum(K((x - xi)/h), i = 1..n)
where K is the kernel—a non-negative function—and h > 0 is a smoothing parameter called the bandwidth. A kernel with subscript h is called the scaled kernel and defined as Kh(x) = 1/h K(x/h). Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance.
The probability density function of Gaussian Kernel Density Estimation is given by
f(x) = 1 / (v * 2*pi)^0.5 * exp(-(x - m)^2 / (2 * v))
where v is the bandwidth component h squared
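Putting the two formulas together, here is a minimal Gaussian KDE evaluated at a single point (a Python/numpy sketch; the sample and bandwidth are hypothetical):

import numpy as np

def gaussian_kde_pdf(x, samples, h):
    """Gaussian kernel density estimate at point x: (1/nh) * sum K((x - xi)/h)."""
    u = (x - samples) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # standard normal kernel
    return k.sum() / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.standard_normal(500)                # hypothetical return sample
print(gaussian_kde_pdf(0.0, samples, h=0.3))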
KDE Bandwidth Estimation
Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a "rule of thumb", by cross-validation, by "plug-in methods" or by other means. The default is Scott's Rule.
Scott's Rule
n ^ (-1/(d+4))
with n the number of data points and d the number of dimensions.
In the case of unequally weighted points, this becomes
neff^(-1/(d+4))
with neff the effective number of datapoints.
Silverman's Rule
(n * (d + 2) / 4)^(-1 / (d + 4))
or in the case of unequally weighted points:
(neff * (d + 2) / 4)^(-1 / (d + 4))
With a set of weighted samples, the effective number of datapoints neff is defined by:
neff = sum(weights)^2 / sum(weights^2)
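Both rules reduce to simple expressions in code; note that in practice (for example in scipy's gaussian_kde) the factor below is multiplied by the sample standard deviation to get the bandwidth. A sketch:

import numpy as np

def scott_factor(n, d=1):
    return n ** (-1.0 / (d + 4))

def silverman_factor(n, d=1):
    return (n * (d + 2) / 4.0) ** (-1.0 / (d + 4))

def neff(weights):
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()   # effective number of datapoints

print(scott_factor(252), silverman_factor(252))
print(neff([1, 1, 1, 1]))                  # equal weights -> 4.0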
Manual input
You can provide your own bandwidth input. This is useful for those who wish to run external to TradingView Grid Search Machine Learning algorithms to solve for the bandwidth per ticker.
Inverse CDF of KDE Calculation
1. Create an array of random normalized numbers, using an inverse CDF of a normal distribution with mean zero and standard deviation one.
2. Create a line space range of values -3 to 3
3. Create a Gaussian Kernel Density Estimate CDF by iterating over the line space array created in step 2: for each line space value, take the mean of the kernel CDF evaluated at the difference between that value and each random number, divided by the bandwidth.
4. Derive test statistics from the resulting KDE CDF: cubic spline interpolation is used to solve for the line space value at a given alpha, computed from the user-selected probability percent in the settings.
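The four steps above can be sketched in a few lines of Python (numpy/scipy; the bandwidth and grid density are hypothetical choices):

import numpy as np
from scipy.stats import norm
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
z = norm.ppf(rng.uniform(size=1000))          # step 1: random normals via an inverse CDF
grid = np.linspace(-3, 3, 121)                # step 2: line space of values -3 to 3
h = 0.3                                       # hypothetical bandwidth
cdf = np.array([norm.cdf((g - z) / h).mean() for g in grid])  # step 3: KDE CDF on the grid
inverse_cdf = CubicSpline(cdf, grid)          # step 4: invert by interpolation
print(inverse_cdf(0.975))                     # line space value for alpha = 0.975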
Volatility
Close-to-Close
Close-to-Close volatility is a classic and the most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator, but in many cases it is not precise enough: stock prices can jump considerably during a trading session and return to the open value by the end. That means a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
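A minimal sketch of the close-to-close estimator (Python/numpy; assumes daily closes and 252 trading days per year):

import numpy as np

def close_to_close_vol(close, periods_per_year=252):
    r = np.diff(np.log(close))                  # log returns ln(C_t / C_{t-1})
    return r.std(ddof=1) * np.sqrt(periods_per_year)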
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses high and low prices for a day, rather than only the closing price. That is useful as close to close prices could show little difference while large price movements could have happened during the day. Thus Parkinson's volatility is considered to be more precise and requires less data for calculation than the close-close volatility.
One drawback of this estimator is that it doesn't take into account price movements after market close. Hence it systematically undervalues volatility. That drawback is taken into account in the Garman-Klass's volatility estimator.
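The standard Parkinson formula uses only the high-low range; a sketch under the same assumptions as above:

import numpy as np

def parkinson_vol(high, low, periods_per_year=252):
    hl = np.log(np.asarray(high) / np.asarray(low)) ** 2
    return np.sqrt(hl.mean() / (4 * np.log(2)) * periods_per_year)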
Garman-Klass
Garman Klass is a volatility estimator that incorporates open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
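The standard Garman-Klass formula combines the high-low range with the open-to-close move; a sketch:

import numpy as np

def garman_klass_vol(open_, high, low, close, periods_per_year=252):
    hl = np.log(np.asarray(high) / np.asarray(low)) ** 2
    co = np.log(np.asarray(close) / np.asarray(open_)) ** 2
    var = 0.5 * hl - (2 * np.log(2) - 1) * co   # per-bar variance estimate
    return np.sqrt(var.mean() * periods_per_year)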
Researchers Rogers and Satchell have proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike the Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates a drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
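The standard Rogers-Satchell formula, sketched under the same conventions as the estimators above:

import numpy as np

def rogers_satchell_vol(open_, high, low, close, periods_per_year=252):
    o, h, l, c = map(np.asarray, (open_, high, low, close))
    var = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)
    return np.sqrt(var.mean() * periods_per_year)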
A more comprehensive estimator that also considers the gaps between sessions was developed from the Rogers-Satchell formula in 2000 by Yang and Zhang. See Yang Zhang Volatility for more detail.
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of the Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be up to 14 times more efficient than the close-to-close estimator.
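A sketch of that combination, using the usual weighting constant k from Yang and Zhang (2000):

import numpy as np

def yang_zhang_vol(open_, high, low, close, periods_per_year=252):
    o, h, l, c = map(np.asarray, (open_, high, low, close))
    n = len(c) - 1                                 # number of return periods
    overnight = np.log(o[1:] / c[:-1])             # close-to-open returns
    open_close = np.log(c[1:] / o[1:])             # open-to-close returns
    rs = (np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o))[1:]
    k = 0.34 / (1.34 + (n + 1) / (n - 1))
    var = overnight.var(ddof=1) + k * open_close.var(ddof=1) + (1 - k) * rs.mean()
    return np.sqrt(var * periods_per_year)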
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang extends the Garman-Klass estimator by adding a close-to-open (overnight) return term, so that price jumps between sessions are taken into account. This makes it more accurate than plain Garman-Klass when prices gap overnight, although, like Garman-Klass, it still assumes a driftless price process.
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed as such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
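A minimal EWMA variance recursion in Python. Note that conventions differ on which term lambda weights; the RiskMetrics convention sketched below puts lambda on the previous variance, so a lower lambda tracks the latest observation more closely:

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA: var_t = lam * var_{t-1} + (1 - lam) * r_t^2."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var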
Standard Deviation of Log Returns
This is the simplest calculation of volatility. It's the standard deviation of ln(close/close(1))
Pseudo GARCH(2,2)
This is calculated using a short- and long-run mean of variance multiplied by θ.
θavg(var; M) + (1 − θ)avg(var; N) = 2θvar/(M+1-(M-1)L) + 2(1-θ)var/(N+1-(N-1)L)
where M and N are the short and long window lengths and L is the lag operator.
Solving for θ can be done by minimizing the mean squared error of estimation; that is, regressing L^-1var - avg(var; N) against avg(var; M) - avg(var; N) and using the resulting beta estimate as θ.
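A sketch of that regression in Python/pandas (window lengths M and N are hypothetical; var is a series of per-period variance estimates):

import pandas as pd

def estimate_theta(var, M=10, N=60):
    """OLS beta of (next-period var - long-run mean) on (short-run mean - long-run mean)."""
    v = pd.Series(var)
    y = v.shift(-1) - v.rolling(N).mean()           # L^-1 var - avg(var; N)
    x = v.rolling(M).mean() - v.rolling(N).mean()   # avg(var; M) - avg(var; N)
    xy = pd.concat([x, y], axis=1).dropna()
    x, y = xy.iloc[:, 0], xy.iloc[:, 1]
    return ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()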
Manual
User input % value
Drift
Cost of Equity / Required Rate of Return (CAPM)
Standard Capital Asset Pricing Model used to solve for the Cost of Equity or Required Rate of Return. Due to the processor overhead required to compute CAPM, the user must plug in values for beta, alpha, and expected market return using Loxx's CAPM indicator series. Used for stocks.
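The underlying CAPM arithmetic is simple; a sketch with hypothetical inputs:

def capm_required_return(rf, beta, expected_market_return, alpha=0.0):
    """CAPM: r = rf + beta * (E[r_m] - rf), plus any user-supplied alpha."""
    return rf + beta * (expected_market_return - rf) + alpha

print(capm_required_return(rf=0.04, beta=1.2, expected_market_return=0.09))  # 0.10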
Mean of Log Returns
Average of the log returns for the underlying ticker over the user selected period of evaluation. General purpose use.
Risk-free Rate (r)
10, 20, or 30 year bond yields for the user selected currency. Under equilibrium the drift of the empirical GBM must be the risk-free rate. If the price process is a GBM under the empirical measure, then a consequence of viability is that it is also a GBM under an equivalent (risk-neutral) measure.
Risk-free Rate adjusted for Dividends (r-q)
This is the Risk-free Rate minus the Dividend Yield.
Forex (r-rf)
This is derived from the Garman and Kohlhagen (1983) modification of the Black-Scholes model, which can be used to price European currency options. The drift is simply the difference between the risk-free rate of the domestic currency and the risk-free rate of the foreign currency in the pair. This is used for Forex pricing.
Martingale (0)
When the drift parameter is 0, geometric Brownian motion is a martingale. In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values. Typically used for futures or margined futures.
Manual
User input % value
Additional notes
Indicator can be used on any timeframe. The T (time) variable used to annualize volatility and inside the GBM formula is automatically calculated based on the timeframe of the chart.
The confidence interval of volatility is calculated using an inverse CDF of a Chi-Squared Distribution. You can change the volatility input used to create the probability cones from realized volatility to the upper or lower confidence levels of volatility to better visualize the extremes of the range. Generally, you'd stick with realized volatility.
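For reference, a sketch of that chi-squared confidence interval for volatility (Python/scipy; n is the number of return observations):

import numpy as np
from scipy.stats import chi2

def volatility_confidence_interval(sigma, n, alpha=0.05):
    """CI for sigma, using (n-1)s^2/sigma^2 ~ chi-squared with n-1 degrees of freedom."""
    dof = n - 1
    lower = sigma * np.sqrt(dof / chi2.ppf(1 - alpha / 2, dof))
    upper = sigma * np.sqrt(dof / chi2.ppf(alpha / 2, dof))
    return lower, upper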
Days per year should be 252 for everything but Cryptocurrency. These are days traded per year. The maximum number of future forecast bars is 365, and forecast bars are limited to the selected days per year.
Includes the ability to overlay option expiration dates by bars to see the range of prices for that date at that bar
You can select the confidence % you wish for both the cones in general and the volatility. There are three levels of cones; these show as three different levels above and below price on the chart.
The table on the right displays important calculated values so you don't have to remember what they are or what settings you selected
All values are annualized no matter the timeframe.
Additional distributions and measures of volatility and drift will be added in future releases.
Probability
Bayesian BBSMA + nQQE Oscillator + Bank funds (whales detector)
Three trend indicators in one. A fork of Gunslinger2005's indicator, with a fix to display the nQQE oscillator correctly and clearly, converted to Pine Script v5 (allowing you to set a different timeframe and gaps).
How to use: Essentially, nQQE is a long-term trend indicator, more adequate on the daily or weekly timeframe to indicate the current market cycle. Banker Fund seems better suited to indicate the current local trend, although it is sensitive to relief rallies. Bayesian BBSMA is an awesome tool to visualize the buildup in bullish/bearish sentiment and when it is more likely to get released; however, it is unreliable on its own, so it needs to be combined with other indicators.
Please show the original indicators some love:
Bayesian BBSMA:
nQQE:
L3 Banker Fund Flow Trend:
Originally mixed together by Gunslinger2005:
Probability Cloud BASIC [@AndorraInvestor]🔮☁️
This is the BASIC version of the PROBABILITY CLOUD indicator.
It is an evolution beyond traditional standard deviation probabilistic indicators only using bands or channels.
The new PROBABILITY CLOUD graphic representation with customizable transparent layers is based on -2 / +2 standard deviation calculated using 20 fixed predetermined time periods, and is available in several calculation MODES:
SMA, EMA, WMA, VWMA & VAWMA
The indicator is designed to let the trader visually understand the probabilistic depth of past, present and future price action, and its evolution over time.
Looking forward to your comments and feedback to guide me on future updates!
🙏 Big THANKS @Electrified for letting me use his work on Deviation Bands as a starting point for my first script.
Breakout Probability (Expo)
█ Overview
Breakout Probability is a valuable indicator that calculates the probability of a new high or low and displays it as a level with its percentage. The probability of a new high and low is backtested, and the results are shown in a table— a simple way to understand the next candle's likelihood of a new high or low. In addition, the indicator displays an additional four levels above and under the candle with the probability of hitting these levels.
The indicator helps traders to understand the likelihood of the next candle's direction, which can be used to set your trading bias.
█ Calculations
The algorithm calculates all the green and red candles separately depending on whether the previous candle was red or green and assigns scores if one or more lines were reached. The algorithm then calculates how many candles reached those levels in history and displays it as a percentage value on each line.
█ Example
In this example, the previous candlestick was green; we can see that a new high has been hit 72.82% of the time and the low only 28.29%. In this case, a new high was made.
█ Settings
Percentage Step
The space between the levels can be adjusted with a percentage step. 1% means that each level is located 1% above/under the previous one.
Disable 0.00% values
If a level has a 0% likelihood of being hit, the level is not displayed by default. Enable this option if you want to see all levels regardless of their values.
Number of Lines
Set the number of levels you want to display.
Show Statistic Panel
Enable this option if you want to display the backtest statistics for how often a new high or low is made. (Only tracks whether the first levels have been reached or not.)
█ Any Alert function call
An alert is sent on candle open, and you can select what should be included in the alert. You can enable the following options:
Ticker ID
Bias
Probability percentage
The first level high and low price
█ How to use
This indicator is a perfect tool for anyone that wants to understand the probability of a breakout and the likelihood that set levels are hit.
The indicator can be used for setting a stop loss based on where the price is most likely not to reach.
The indicator can help traders to set their bias based on probability. For example, look at the daily or a higher timeframe to get your trading bias, then go to a lower timeframe and look for setups in that direction.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Lune Market Analysis Premium
- Version 0.9 -
Lune Algo was developed and built by Lune Trading, utilizing years of their trading expertise. This indicator works on all stocks, cryptos, indices, forex, futures, currencies, ETFs, energy, and commodities. All the tools and features you need to assist you on your trading journey. Best of all, Lune Algo is easy to use, and many of our tools and strategies have been thoroughly backtested thousands of times to ensure that users have the best experience possible.
Overview
Trade Dashboard—Provides information about the current market conditions, Such as if the market is trending up or down, how much volatility is in the market and even displays information about the current signal.
Trade Statistics—This tool gives you a breakdown of the Statistics of the current selected strategy based on backtests. It tells you the percentage of how often a Take Profit or Stop Loss was hit within a specific time period. Risk and Trade management is very important in trading, and can be the difference between a winning and losing strategy. So we believe that this was mandatory.
Current Features:
Advanced Buy and Sell Signals
Exclusive built-in Strategies
Lune Confidence AI
EK Clouds
Reversal Bands
Vray (Volume Ray)
Divergence Signals
Reversal Signals
Support/Resistance Zones
Built-in Themes
Built-in Risk Management system (take profit/stop loss)
Trade Statistics
Trade Assistance
Trade Dashboard
Advanced Settings
+ More coming soon, Big plans!
Features Breakdown:
Lune Confirmation—Used to help you confirm your trades and trend direction. It uses unique calculations, and its settings can be adjusted to allow traders to adapt the settings to fit their trading style.
Lune Confidence AI—All strategies are equipped with our exclusive built-in Confidence AI. This feature tells you how much confluence there is in a trade. It uses a rating system where signals are given a number from 0 to 5. A rating of 0 indicates that there is not a lot of confluence or confidence in the signal, while a rating of 5 indicates that there is a lot of confidence in the trade. This feature is not perfect and will be improved over time.
Support/Resistance Zones—Calculates the most important support/resistance levels based on how many times a level has been used as support or resistance. Traders also refer to these as supply and demand zones and key levels.
EK Clouds—Used to further help you confirm trend and was optimized to also be used as support and resistance. This feature is powered by custom moving averages.
Reversal Bands—An optimized and improved version of the famous Bollinger Bands. When price action takes place within the Reversal Bands, it usually indicates that the current symbol is overextended and a reversal is possible.
Vray—Also Known as "Volume Ray", Assists you in better visualizing volume. This helps you find key levels and areas of support that you wouldn't be able to see otherwise. It helps you trade like the institutions.
This indicator's signals DO NOT REPAINT.
If you are using this script you acknowledge past performance is not necessarily indicative of future results and there are many more factors that go into being a profitable trader.
season
This script is meant to help verify the existence of a seasonal effect in asset returns, using a Z-test. There are three steps:
1. Think of a way to identify a season. The available methods are: by month, by week of the year, by day of the month, by day of the week, by hour of the day, and by minute of the hour.
2. Set the chart to the unit of your season. For example, if you want to check whether a crop commodity's harvest season has a seasonal implication, select "month". If you want to investigate the exchange's opening or close, select "hour".
3. Using the inputs, select the unit (e.g. "month", "dayofweek", "hour", etc.) and the range that identifies the season. The example natural gas chart has set "start" to 8 and "end" to 12 for September through December.
The test logic is as follows:
The "season" you select has a fixed length; for example, months eight through twelve has a length of four. This length is used to compute a sample mean, which is the mean return of all September-December periods in the chart. It is also used to calculate the mean/stdev of every other four-month period in the chart history. The latter is considered the "population." Using a Z-test, the script scores the difference between the sample returns and the population returns, and displays the results at two levels of significance (P = 0.05 and P = 0.01). The null hypothesis is "there is no difference between the seasonal periods and the population of ordinary periods". If the Z-score is sufficiently large or small, we can reject the null hypothesis and say that there is a seasonal effect at the given level of confidence. The output table will show green for a rejection of the null hypothesis (meaning there is a seasonal effect) or red of acceptance (there is no seasonal effect).
The seasonal periods that you have defined will be highlighted on the chart, so you can make sure they are correct. Additionally, the output table shows the mean, median, standard deviation, and top and bottom percentiles for both the seasonal and population samples.
Many news sites, twitter feeds, influencers, etc. enjoy posting statistics about past returns, like "the stock market has gone up on this day 85 out of the past 100 years" and so on. Unfortunately, these posts don't tell you that many of these statistics are meaningless, as even totally random price fluctuations will cause many such interesting figures to occur. This script provides a limited means of testing some such seasonal effects so you can see if they are probably just random, or if they may have some meaning.
Note that Tradingview seems to use 1-based indexing for daily or higher timeframes, and 0-based indexing for intraday timeframes:
Months: 1-12
Weeks: 1-52
Days (of month): 1-31
Days (of week): 1-7
Hours (of day): 0-23
Minutes (of hour): 0-59
MathProbabilityDistribution
Library "MathProbabilityDistribution"
Probability Distribution Functions.
name(idx) Indexed names helper function.
Parameters:
idx : int, position in the range (0, 6).
Returns: string, distribution name.
usage:
.name(1)
Notes:
(0) => 'StdNormal'
(1) => 'Normal'
(2) => 'Skew Normal'
(3) => 'Student T'
(4) => 'Skew Student T'
(5) => 'GED'
(6) => 'Skew GED'
zscore(position, mean, deviation) Z-score helper function for x calculation.
Parameters:
position : float, position.
mean : float, mean.
deviation : float, standard deviation.
Returns: float, z-score.
usage:
.zscore(1.5, 2.0, 1.0)
std_normal(position) Standard Normal Distribution.
Parameters:
position : float, position.
Returns: float, probability density.
usage:
.std_normal(0.6)
normal(position, mean, scale) Normal Distribution.
Parameters:
position : float, position in the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.normal(0.6)
skew_normal(position, skew, mean, scale) Skew Normal Distribution.
Parameters:
position : float, position in the distribution.
skew : float, skewness of the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.skew_normal(0.8, -2.0)
ged(position, shape, mean, scale) Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.ged(0.8, -2.0)
skew_ged(position, shape, skew, mean, scale) Skew Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_ged(0.8, 2.0, 1.0)
student_t(position, shape, mean, scale) Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.student_t(0.8, 2.0, 1.0)
skew_student_t(position, shape, skew, mean, scale) Skew Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_student_t(0.8, 2.0, 1.0)
select(distribution, position, mean, scale, shape, skew, log) Conditional Distribution.
Parameters:
distribution : string, distribution name.
position : float, position.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
shape : float, shape.
skew : float, skew.
log : bool, if true apply log() to the result.
Returns: float, probability.
usage:
.select('StdNormal', __CYCLE4F__, log=true)
Probability Cones
A probability cone is an indicator that forecasts a statistical distribution from a set point in time into the future.
Features
Forecast a Standard or Laplace distribution.
Change the how many bars the cones will lookback and sample in their calculations.
Set how many bars to forecast the cones.
Let the cones follow price from a set number of bars back.
Anchor the cones and they will not update from their last location.
Show or hide any set of cones.
Change the deviation used for any cone's upper or lower line.
Change any line's color, style, or width.
Change or toggle the fill colors between any two cone lines.
Basic Interpretations
First, there is an assumption that the distribution starting from the cone's origin, based on the number of historical bars sampled, is likely to represent the distribution of future price.
Price typically hangs around the mean.
About 68% of price stays within the first deviation cones.
About 95% of price stays within the second deviation cones.
About 99.7% of price stays within the third deviation cones.
When price is between the first and second deviation cones, there is a higher probability for a reversal.
However, strong momentum while above or below the first deviation can indicate a trend where price maintains itself past the first deviation. For this reason it's recommended to use a momentum indicator alongside the cones.
There is no mean reversion assumption when price deviates. Price can continue to stay deviated.
It's recommended that the cones are placed at the beginning of calendar periods. Like the month, week, or day.
Be mindful when using the cones on various timeframes. As the lookback setting, which selects the number of bars back to load from the cone's origin, will load the number of bars back based on the current timeframe.
Second Deviation Strategy
How to react when price goes beyond the second deviation is contingent on your trading position.
If you are holding a losing trade and price has moved past the second deviation, it could be time to stop trading and exit.
If you are holding a winning trade and price has moved past the second deviation, it would be best to look at exit strategies to capitalize on the outperformance.
If price has moved beyond the second deviation and you hold no position, then do not open any new trades.
probability_of_touch
Based on historical data (rather than theory), calculates the probability of a price level being "touched" within a given time frame. A "touch" means that price exceeded that level at some point. The parameters are:
- level: the "level" to be touched. it can be a number of points, percentage points, or standard deviations away from the mark price. a positive level is above the mark price, and a negative level is below the mark price.
- type: determines the meaning of the "level" parameter. "price" means price points (i.e. the numbers you see on the chart). "percentage" is expressed as a whole number, not a fraction. "stdev" means the number of standard deviations, which is computed from recent realized volatility.
- mark: the point from which the "level" is measured.
- length: the number of days within which the level must be touched.
- window: the number of days used to compute realized volatility. this parameter is only used when "type" is "stdev".
- debug: displays a fuchsia "X" over periods that touched the level. note that only a limited number of labels can be drawn.
- start: only include data after this time in the calculation.
- end: only include data before this time in the calculation.
Example: You want to know how many times Apple stock fell $1 from its closing price the next day, between 2020-02-26 and today. Use the following parameters:
level: -1
type: price
mark: close
length: 1
window:
debug:
start: 2020-02-26
end:
How does the script work? On every bar, the script looks back "length" days and sees if any day exceeded the "mark" price from "length" days ago, plus the limit. The probability is the ratio of such periods wherein price exceeded the limit to the total number of periods.
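That counting logic can be sketched in plain Python (the function name and arguments below are illustrative, not the script's actual API):

def probability_of_touch(close, high, low, level, length=1):
    """Fraction of rolling windows in which price touched mark + level.
    A positive level checks the highs; a negative level checks the lows."""
    touches, total = 0, 0
    for i in range(len(close) - length):
        mark = close[i]
        if level >= 0:
            hit = max(high[i + 1 : i + 1 + length]) >= mark + level
        else:
            hit = min(low[i + 1 : i + 1 + length]) <= mark + level
        touches += hit
        total += 1
    return touches / total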
NEXT Regressive VWAP
Overview:
This version of the Volume-Weighted Average Price (VWAP) indicator features an extended algorithm, which, in addition to volume and price, also incorporates regression analysis. The result is a more responsive, often leading VWAP slope with a degree of statistical predictability built in. Just like with the original VWAP, NEXT Regressive VWAP offers two optional Standard Deviation bands that parallel it. These can be set to any deviation level, with the default being 1 and -1, indicating one standard deviation above and one below Regressive VWAP, respectively.
Below is a screenshot comparing NEXT Regressive VWAP (green) to the original VWAP (blue) on CME_MINI:ES1! M3 chart.
Application and Strategy Ideas:
Price above NEXT Regressive VWAP is interpreted to have a bullish bias, and below, bearish. You can use TradingView's native Set Alert functionality to be notified, in real-time, when price crosses Regressive VWAP, and/or any of its standard deviation bands. Another popular "probability play" strategy is to scalp price when it crosses under the upper band (short) and crosses over the lower band (long). The screenshot below visualizes such a strategy on NASDAQ:QQQ M1 chart:
Input Parameters:
There are 3 groups of input.
Regression Settings
Length - controls the length of time (in bars) for regression analysis with higher values yielding smoother, more responsive values.
Regression Weighting - controls the degree of regression analysis incorporated into VWAP, with 5 being average, 0-4 less, 6-10 more. The higher the value, the more responsive the Regressive VWAP curve.
VWAP Settings
Anchor Period - controls the origin of VWAP calculations, start of session being the default.
Source - data used for calculating the VWAP, typically HLC/3, but can be used with other price formats and data sources as well.
Offset - shifting of the VWAP line forward (+) or backward (-).
Standard Deviation Bands Settings
Calculate Bands - checking this will add 2 bands, each equidistant (by the amount of Multiplier) from the NEXT Regressive VWAP line.
Bands Multiplier - standard deviation multiplier, with 1 being the default
Signals and Alerts:
Here is how to set price (close) crossing NEXT Regressive VWAP alerts: open a chart, attach NEXT Regressive VWAP, and right-click on chart -> Add Alert. Condition: Symbol e.g. ES (close) >> Crossing >> Regressive VWAP >> VWAP >> Once Per Bar Close.
FunctionProbabilityDistributionSampling
Library "FunctionProbabilityDistributionSampling"
Methods for probability distribution sampling selection.
sample(probabilities) Computes a random selected index from a probability distribution.
Parameters:
probabilities : float array, probabilities of sample.
Returns: int.
FunctionSMCMC
Library "FunctionSMCMC"
Methods to implement Markov Chain Monte Carlo Simulation (MCMC)
markov_chain(weights, actions, target_path, position, last_value) a basic implementation of the markov chain algorithm
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
target_path : float array, target path array.
position : int, index of the path.
last_value : float, base value to increment.
Returns: void, updates target array
mcmc(weights, actions, start_value, n_iterations) uses a monte carlo algorithm to simulate a markov chain at each step.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
start_value : float, base value to start simulation.
n_iterations : integer, number of iterations to run.
Returns: float array with path.
Probability
Library "Probability"
erf(value) Complementary error function
Parameters:
value : float, value to test.
Returns: float
ierf_mcgiles(value) Computes the inverse error function using the Mc Giles method, sacrifices accuracy for speed.
Parameters:
value : float, -1.0 >= _value >= 1.0 range, value to test.
Returns: float
ierf_double(value) computes the inverse error function using the Newton method with double refinement.
Parameters:
value : float, -1. > _value > 1. range, _value to test.
Returns: float
ierf(value) computes the inverse error function using the Newton method.
Parameters:
value : float, -1. > _value > 1. range, _value to test.
Returns: float
complement(probability) probability that the event will not occur.
Parameters:
probability : float, 0 >=_p >= 1, probability of event.
Returns: float
entropy_gini_impurity_single(probability) Gini Impurity or Gini Index for a given probability.
Parameters:
probability : float, 0>=x>=1, probability of event.
Returns: float
entropy_gini_impurity(events) Gini Impurity or Gini Index for a series of events.
Parameters:
events : float , 0>=x>=1, array with event probability's.
Returns: float
entropy_shannon_single(probability) Entropy information value of the probability of a single event.
Parameters:
probability : float, 0>=x>=1, probability value.
Returns: float, value as bits of information.
entropy_shannon(events) Entropy information value of a distribution of events.
Parameters:
events : float , 0>=x>=1, array with probability's.
Returns: float
inequality_chebyshev(n_stdeviations) Calculates Chebyshev Inequality.
Parameters:
n_stdeviations : float, positive over or equal to 1.0
Returns: float
inequality_chebyshev_distribution(mean, std) Calculates Chebyshev Inequality.
Parameters:
mean : float, mean of a distribution
std : float, standard deviation of a distribution
Returns: float
inequality_chebyshev_sample(data_sample) Calculates Chebyshev Inequality for a array of values.
Parameters:
data_sample : float , array of numbers.
Returns: float
intersection_of_independent_events(events) Probability that all arguments will happen when neither outcome
is affected by the other (accepts 1 or more arguments)
Parameters:
events : float , 0 >= _p >= 1, list of event probabilities.
Returns: float
union_of_independent_events(events) Probability that either one of the arguments will happen when neither outcome
is affected by the other (accepts 1 or more arguments)
Parameters:
events : float , 0 >= _p >= 1, list of event probabilities.
Returns: float
mass_function(sample, n_bins) Probabilities for each bin in the range of sample.
Parameters:
sample : float , samples to pool probabilities.
n_bins : int, number of bins to split the range
Returns: float
cumulative_distribution_function(mean, stdev, value) Use the CDF to determine the probability that a random observation
that is taken from the population will be less than or equal to a certain value.
Or returns the area of probability for a known value in a normal distribution.
Parameters:
mean : float, samples to pool probabilities.
stdev : float, number of bins to split the range
value : float, limit at which to stop.
Returns: float
transition_matrix(distribution) Transition matrix for the suplied distribution.
Parameters:
distribution : float , array with probability distribution. ex:.
Returns: float
diffusion_matrix(transition_matrix, dimension, target_step) Probability of reaching target_state at target_step after starting from start_state
Parameters:
transition_matrix : float , "pseudo2d" probability transition matrix.
dimension : int, size of the matrix dimension.
target_step : number of steps to find probability.
Returns: float
state_at_time(transition_matrix, dimension, start_state, target_state, target_step) Probability of reaching target_state at target_step after starting from start_state
Parameters:
transition_matrix : float , "pseudo2d" probability transition matrix.
dimension : int, size of the matrix dimension.
start_state : state at which to start.
target_state : state to find probability.
target_step : number of steps to find probability.
Probability MTF [Anan]
█ OVERVIEW
Probability is simply how likely something is to happen.
Whenever we’re unsure about the outcome of an event, we can talk about the probabilities of certain outcomes—how likely they are.
The best example for understanding probability is flipping a coin: there are two possible outcomes—heads or tails.
In our case, the coin is (Green/Red) Candles
So:
Probability of an event = (# of ways it can happen) / (total number of outcomes)
P(A) = (# of ways A can happen) / (Total number of outcomes)
So:
The probability that the next candle is green (up) = (# of past green candles) / (total number of candles in the lookback "length")
The probability that the next candle is red (down) = (# of past red candles) / (total number of candles in the lookback "length")
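In code, the frequency count is a one-liner per side (a Python sketch; excluding doji candles where close equals open is an assumption here):

def next_candle_probabilities(closes, opens):
    """P(green) and P(red) as simple frequencies over the lookback window."""
    green = sum(c > o for c, o in zip(closes, opens))
    red = sum(c < o for c, o in zip(closes, opens))
    total = green + red            # doji candles (close == open) are excluded
    return green / total, red / total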
█ FEATURES
- Fully control of Probability (Source / Length)
- Show / Hide / customize three rows of Probability.
- Multi-timeframe Table.
- Full control of displaying any row or any column.
- Full control of Table position and Size and Colors.
xGhozt Prophecies - A Forecast on the Future
xGhozt Prophecies - A Forecast on the Future is an indicator based on past statistics and different dates.
The indicator goes back in time and checks all the candles of your selected time frame, and gives you the statistical potential outcome of the next candle. It has been created in order to anticipate potential violent moves from the markets when key dates arrive. On March 12, 2020, Bitcoin dropped by nearly 50% in one single day. Many crypto traders were left with a PTSD that emerged on March 11, 2021, as many anticipated another crash on March 12, 2021. Therefore I created this indicator to show you how a candle behaved in the past, on a certain date.
You can replicate the model on any given time frame, on any asset, and you can even pre-select important dates in the indicator settings box to keep an eye on these dates at any given time.
You can therefore check how an asset behaves on Mondays, or on the last day of the month, or how the 1h candle behaves on this asset on a Tuesday. Many combinations are available.
Implied Target
This script attempts to estimate the targets that the current price may reach based on an exponentially weighted volatility model.
Overall, with the assumption of normal distribution of log return, which might not always hold true, it calculates the estimated range within which the current candles will close. One, two, and three sigma will give the probability of around 68%, 95% and 99% respectively.
This can be used to give you a better sense of what is possible with the current level of volatility, thus assisting in risk management and position sizing.
Like with any indicators, it is recommended that you use this script as a confirmation to your strategy, and not take the estimated range blindly to carry out, for instance, mean-reversion trade. Again, it is merely an estimation with volatility at its core.
May you be on the right side of the trade.
Probability Of Expiring Cone
This script attempts to give forecasts over the range of the closing price based on the exponentially weighted volatility.
Overall, with the assumption of normal distribution of log return, which might not always hold true, it calculates the approximate/ estimated probability that the current candles will close within the plotted shape. One, two, and three sigma will give the probability of around 68%, 95% and 99% respectively.
This can be used to give you a better sense of what is likely with the current level of volatility, thus assisting in risk management and position sizing.
May you be on the right side of the trade.
Probability Distribution Histogram
During data exploration it is often useful to plot the distribution of the data one is exploring. This indicator plots the distribution of data between different bins.
Essentially, what we do is we look at the min and max of the entire data set to determine its range. When we have the range of the data, we decide how many bins we want to divide this range into, so that the more bins we get, the smaller the range (a.k.a. width) for each bin becomes. We then place each data point in its corresponding bin, to see how many of the data points end up in each bin. For instance, if we have a data set where the smallest number is 5 and the biggest number is 105, we get a range of 100. If we then decide on 20 bins, each bin will have a width of 5. So the left-most bin would therefore correspond to values between 5 and 10, and the bin to the right would correspond to values between 10 and 15, and so on.
Once we have distributed all the data points into their corresponding bins, we compare the count in each bin to the total number of data points, to get a percentage of the total for each bin. So if we have 100 data points, and the left-most bin has 2 data points in it, that would equal 2%. This is also known as probability mass (or well, an approximation of it at least, since we're dealing with a bin, and not an exact number).
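The binning and probability-mass step maps directly onto numpy's histogram (a sketch with toy data matching the example above):

import numpy as np

def probability_histogram(data, n_bins=20):
    """Bin the data over its min-max range and return each bin's share of the total."""
    counts, edges = np.histogram(data, bins=n_bins)
    return counts / counts.sum(), edges

data = [5, 7, 10, 12, 50, 105]     # toy data: range 100, so 20 bins of width 5
mass, edges = probability_histogram(data)
print(mass[0])                     # probability mass of the left-most bin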
Usage
This is not an indicator that will give you any trading signals. This indicator is made to help you examine data. It can take any input you give it and plot how that data is distributed.
The indicator can transform the data in a few ways to help you get the most out of your data exploration. For instance, it is usually more accurate to use logarithmic data than raw data, so there is an option to transform the data using the natural logarithmic function. There is also an option to transform the data into %-Change form or by using data differencing.
Another option that the indicator has is the ability to trim data from the data set before plotting the distribution. This can help if you know there are outliers that are made up of corrupted data or data that is not relevant to your research.
I also included the option to plot the normal distribution as well, for comparison. This can be useful when the data is made up of residuals from a prediction model, to see if the residuals seem to be normally distributed or not.
Probability Table
This script is inspired by user NickbarComb; I suggest checking out his Price Convergence script.
Basically, this script plots a table containing the probability of the current candle closing either higher or lower, based on a user-defined past period.
Hope that it will be helpful.
Function - Probability Chebyshev Inequality
Function to calculate the Chebyshev Inequality, which can be used to compute the probability that we will diverge from what we expect to obtain.
reference:
- www.omnicalculator.com
- github.com
- statisticstopics.wordpress.com
- en.wikipedia.org
Function - Entropy Gini Index
Function to retrieve Gini Impurity / Gini Index.
reference:
- victorzhou.com
- en.wikipedia.org
Function - Shannon Entropy
Functions for Shannon's entropy.
reference:
- en.wiktionary.org
- machinelearningmastery.com
test - event distribution
Displays the distribution of the outcomes of an event over past events.
similar to this script: