CFB-Adaptive Velocity Histogram [Loxx]
CFB-Adaptive Velocity Histogram is a velocity indicator with One-More-Moving-Average Adaptive Smoothing of the input source value and Jurik's Composite-Fractal-Behavior-Adaptive Price-Trend-Period input with Dynamic Zones. All Jurik smoothing allows for both single and double Jurik smoothing passes. Velocity is adjusted to pips internally; there is no input value for the user. This indicator is tuned for Forex but can be used on any time-series data.
What is Composite Fractal Behavior ( CFB )?
All around you, mechanisms adjust themselves to their environment, from simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m., torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for particular fractal patterns, categorizes them by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is the Jurik Volty used in the Jurik Filter?
One of the lesser-known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for the buy zone and the value of the probability Psell for the sell zone.
For i=1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater than or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such a value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
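As a concrete illustration, here is a minimal sketch of that percentile logic in Python (the published scripts are written in Pine; the function name, the 70-bar default and the use of numpy.percentile are illustrative assumptions, not the scripts' actual code):

import numpy as np

def dynamic_zones(series, lookback=70, p_buy=0.10, p_sell=0.10):
    # For each bar, the buy zone is V1 with P{X <= V1} = Pbuy over the
    # lookback window, and the sell zone is V2 with P{X >= V2} = Psell.
    buy, sell = [], []
    for i in range(lookback, len(series) + 1):
        window = series[i - lookback:i]
        buy.append(np.percentile(window, 100 * p_buy))          # lower tail
        sell.append(np.percentile(window, 100 * (1 - p_sell)))  # upper tail
    return np.array(buy), np.array(sell)

With p_buy = p_sell = 0.10, roughly 10% of the window's values sit below the buy zone and 10% above the sell zone, matching the 80%-between-the-zones example above.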
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
CFB-Adaptive, Williams %R w/ Dynamic Zones [Loxx]
CFB-Adaptive, Williams %R w/ Dynamic Zones is a Jurik-Composite-Fractal-Behavior-Adaptive Williams Percent Range indicator with Dynamic Zones. These additions to the WPR calculation reduce noise and return a signal that is more viable than WPR alone.
What is Williams %R?
Williams %R , also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
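For reference, the standard formula is %R = (HighestHigh - Close) / (HighestHigh - LowestLow) * -100 over the lookback window. A small Python sketch of that calculation (illustrative only; the indicator itself is a Pine script, and high/low/close are assumed to be numpy arrays):

import numpy as np

def williams_r(high, low, close, length=14):
    # 0 means the close is at the top of the range, -100 at the bottom.
    out = np.full(len(close), np.nan)
    for i in range(length - 1, len(close)):
        hh = high[i - length + 1:i + 1].max()
        ll = low[i - length + 1:i + 1].min()
        out[i] = (hh - close[i]) / (hh - ll) * -100.0
    return out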
What is Composite Fractal Behavior ( CFB )?
All around you, mechanisms adjust themselves to their environment, from simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m., torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for particular fractal patterns, categorizes them by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is the Jurik Volty used in the Jurik Filter?
One of the lesser-known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for the buy zone and the value of the probability Psell for the sell zone.
For i=1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater than or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such a value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
Intermediate Williams %R w/ Discontinued Signal Lines [Loxx]
Intermediate Williams %R w/ Discontinued Signal Lines is a Williams %R indicator with advanced options:
-Williams %R smoothing, 30+ smoothing algos found here:
-Williams %R signal, 30+ smoothing algos found here:
-DSL lines with smoothing or fixed overbought/oversold boundaries, smoothing algos are EMA and FEMA
-33 Expanded Source Type inputs including Heiken-Ashi and Heiken-Ashi Better, found here:
What is Williams %R?
Williams %R, also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
Included:
-Toggle on/off bar coloring
-Toggle on/off signal line
OrdinaryLeastSquares
Library "OrdinaryLeastSquares"
One of the most common ways to estimate the coefficients for a linear regression is to use the Ordinary Least Squares (OLS) method.
This library implements OLS in Pine. This implementation can be used to fit a linear regression of multiple independent variables onto one dependent variable,
as long as the assumptions behind OLS hold.
solve_xtx_inv(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
This function returns both the estimated OLS solution and a matrix that essentially measures the model stability (the linear dependence between the columns of 'x').
NOTE: The latter is an intermediate step when estimating the OLS solution, but it is useful when calculating the covariance matrix and is returned here to save computation time, so that this step does not have to be recalculated when things like standard errors are needed.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns both the estimated OLS solution and a matrix that essentially measures the model stability (xtx_inv is equal to (X'X)^-1).
solve(x, y) Solve a linear system of equations using the Ordinary Least Squares method.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
Returns: Returns the estimated OLS solution.
standard_errors(x, y, beta_hat, xtx_inv) Calculate the standard errors.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The row count of 'x' and 'y' must match.
y : The matrix containing the dependent variable. This matrix can only contain one dependent variable and can therefore only contain one column. The row count of 'x' and 'y' must match.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
xtx_inv : This is (X'X)^-1, which means we take the transpose of the X matrix, multiply it by the X matrix and then take the inverse of the result.
This essentially measures the linear dependence between the columns of the X matrix.
Returns: The standard errors.
estimate(x, beta_hat) Estimate the next step of a linear model.
Parameters:
x : The matrix containing the independent variables. Each column is regarded by the algorithm as one independent variable. The column count of 'x' must match the row count of 'beta_hat'.
beta_hat : The Ordinary Least Squares (OLS) solution provided by solve_xtx_inv() or solve().
Returns: Returns the new estimate of Y based on the linear model.
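For readers who want to check the math, here is a numpy sketch that mirrors the library's interface (an illustration of the OLS normal equations, not the Pine source itself; the names simply follow the docs above):

import numpy as np

def solve_xtx_inv(x, y):
    # beta_hat = (X'X)^-1 X'y; (X'X)^-1 is returned for reuse.
    xtx_inv = np.linalg.inv(x.T @ x)
    return xtx_inv @ x.T @ y, xtx_inv

def standard_errors(x, y, beta_hat, xtx_inv):
    # sqrt of the diagonal of s^2 * (X'X)^-1, s^2 = residual variance.
    resid = y - x @ beta_hat
    s2 = float(resid.T @ resid) / (x.shape[0] - x.shape[1])
    return np.sqrt(np.diag(s2 * xtx_inv))

def estimate(x, beta_hat):
    # Predict y for new rows of independent variables.
    return x @ beta_hat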
NormalizedOscillators
Library "NormalizedOscillators"
Collection of some common oscillators. All are zero-mean and normalized to fit in the -1..1 range. Some are modified so that the internal smoothing function is configurable (for example, to enable Hann windowing, which John F. Ehlers uses frequently). Some are modified for other reasons (see comments in the code), but never without a reason. This collection is neither encyclopaedic nor a reference; however, I try to find the most correct implementation. Suggestions are welcome.
rsi2(upper, lower) RSI - second step
Parameters:
upper : Upwards momentum
lower : Downwards momentum
Returns: Oscillator value
Modified by Ehlers from Wilder's implementation to have a zero mean (oscillator from -1 to +1)
Originally: 100.0 - (100.0 / (1.0 + upper / lower))
Ignoring the 100 scale factor, we get: upper / (upper + lower)
Multiplying by two and subtracting 1, we get: (2 * upper) / (upper + lower) - 1 = (upper - lower) / (upper + lower)
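In code, the whole second step collapses to one line (an illustrative Python sketch, assuming upper/lower are the already-smoothed up/down momenta):

def rsi2(upper, lower):
    # Zero-mean RSI in -1..+1; algebraically equal to 2*(Wilder RSI/100) - 1.
    return (upper - lower) / (upper + lower)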
rms(src, len) Root mean square (RMS)
Parameters:
src : Source series
len : Lookback period
Based on John F. Ehlers' implementation
ift(src) Inverse Fisher Transform
Parameters:
src : Source series
Returns: Normalized series
Based on John F. Ehlers' implementation
The input values have been multiplied by 2 (was "2*src", now "4*src") to force expansion - not compression
The inputs may be further modified, if needed
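Note that (e^(2x) - 1) / (e^(2x) + 1) is simply tanh(x), so with the doubled input described above the transform is tanh(2*src). An illustrative Python sketch:

import numpy as np

def ift(src):
    # Inverse Fisher Transform with the doubled input described above:
    # (exp(4x) - 1) / (exp(4x) + 1) == tanh(2x), forcing expansion toward +/-1.
    return np.tanh(2 * np.asarray(src, float))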
stoch(src, len) Stochastic
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
ssstoch(src, len) Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the January 2014 issue of Stocks and Commodities
This is not an implementation of MESA Stochastic, as that is based on a highpass filter not present in this function (but you can construct it)
This implementation is scaled by 0.95, so that the Super Smoother does not exceed +1/-1
I do not know if this is the right way to fix this issue, but it works for now
netKendall(src, len) Noise Elimination Technology by John F. Ehlers
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Introduced in the December 2020 issue of Stocks and Commodities
Uses simplified Kendall correlation algorithm
Implementation by @QuantTherapy:
rsi(src, len, smooth) RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
vrsi(src, len, smooth) Volume-scaled RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
This is my own version of RSI. It scales price movements by the proportion of RMS of volume
mrsi(src, len, smooth) Momentum RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
rrsi(src, len, smooth) Rocket RSI
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Inspired by RocketRSI by John F. Ehlers (Stocks and Commodities, May 2018)
Does not include Fisher Transform of the original implementation, as the output must be normalized
Does not include momentum smoothing length configuration, so always assumes half the lookback length
mfi(src, len, smooth) Money Flow Index
Parameters:
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
lrsi(src, in_gamma, len) Laguerre RSI by John F. Ehlers
Parameters:
src : Source series
in_gamma : Damping factor (default is -1 to generate from len)
len : Lookback period (alternatively, if gamma is not set)
Returns: Oscillator series
The original implementation is parameterized with gamma. As it is impossible to collect gamma in my system, where the only user input is length,
an alternative calculation is included, where gamma is set by dividing len by 30. Maybe a different calculation would be better?
fe(len) Choppiness Index or Fractal Energy
Parameters:
len : Lookback period
Returns: Oscillator series
The Choppiness Index (CHOP) was created by E. W. Dreiss
This indicator is sometimes called Fractal Energy
er(src, len) Efficiency ratio
Parameters:
src : Source series
len : Lookback period
Returns: Oscillator series
Based on Kaufman Adaptive Moving Average calculation
This is the correct Efficiency ratio calculation, and most other implementations are wrong:
the number of bar differences is 1 less than the length, otherwise we are adding the change outside of the measured range!
For reference, see Stocks and Commodities June 1995
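The point about using length-1 differences is easiest to see in code (an illustrative Python sketch, not the library's Pine source):

import numpy as np

def efficiency_ratio(src, length):
    window = np.asarray(src[-length:], float)  # last `length` closes
    net = abs(window[-1] - window[0])          # net change spans length-1 bars
    noise = np.abs(np.diff(window)).sum()      # exactly length-1 differences
    return net / noise if noise else 0.0

Dividing the net change by a sum of `length` differences instead would mix in a change from outside the measured range, which is exactly the error the note above warns about.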
dmi(len, smooth) Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Based on the original TradingView algorithm
Modified with inspiration from John F. Ehlers DMH (but not implementing the DMH algorithm!)
Only ADX is returned
Rescaled to fit -1 to +1
Unlike most oscillators, there is no src parameter as DMI works directly with high and low values
fdmi(len, smooth) Fast Directional Movement Index
Parameters:
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Same as DMI, but without secondary smoothing. Can be smoothed later. Instead, +DM and -DM smoothing can be configured
doOsc(type, src, len, smooth) Execute a particular Oscillator from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
smooth : Internal smoothing algorithm
Returns: Oscillator series
Chande Momentum Oscillator (CMO) is RSI without smoothing. No idea why some authors use different calculations
LRSI with Fractal Energy is a combo oscillator that uses Fractal Energy to tune LRSI gamma, as seen here: www.prorealcode.com
doPostfilter(type, src, len) Execute a particular Oscillator Postfilter from the list
Parameters:
type : Oscillator type to use
src : Source series
len : Lookback period
Returns: Oscillator series
Average Down [Zeiierman]
AVERAGING DOWN
Averaging down is an investment strategy that involves buying additional contracts of an asset when the price drops. This way, the investor increases the size of their position at discounted prices. The averaging-down strategy is highly debated among traders and investors because it can lead either to huge losses or to great returns. Nevertheless, averaging down is often used and favored by long-term investors and contrarian traders. With proper risk management, averaging down can cover losses and magnify the returns when the asset rebounds. However, the main concern for a trader is that it can be hard to distinguish between a pullback and the start of a new trend.
HOW DOES IT WORK
Averaging down is a method to lower the average price at which the investor buys an asset. A lower average price can help investors get back to break even quicker and, if the price continues to rise, capture an even bigger upside and thus increase the total profit from the trade. For example, we buy 100 shares at $60 per share, a total investment of $6,000, and then the asset drops to $40 per share; in order to get back to break even, the price has to go up 50%: (($60/$40) - 1)*100 = 50%.
The power of averaging down comes into play if the investor buys additional shares at a lower price, like another 100 shares at $40 per share; the total investment is then $6,000 + $4,000 = $10,000. The average price for the investment is now $50: (($60 x 100) + ($40 x 100))/200. In order to get back to break even, the price has to rise only 25%: (($50/$40) - 1)*100 = 25%. And if the price continues up to $60 per share, the investor can secure a profit of about 16% of the position's $12,000 value (a $2,000 gain, or 20% on the $10,000 invested). So by averaging down, investors and traders can recover losses more easily and potentially have more profit to secure at the end.
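The arithmetic generalizes to any set of entries; a small Python helper reproduces the worked example above (illustrative only, not part of the tool):

def average_down(entries):
    # entries: list of (price, contracts) pairs
    cost = sum(p * q for p, q in entries)
    qty = sum(q for _, q in entries)
    avg = cost / qty
    last_price = entries[-1][0]
    rise_to_break_even = (avg / last_price - 1) * 100
    return avg, rise_to_break_even

print(average_down([(60, 100), (40, 100)]))  # (50.0, 25.0), as in the example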
THE AVERAGE DOWN TRADINGVIEW TOOL
This script/indicator/trading tool helps traders and investors to get the average price of their position. The tool works for Long and Short and displays the entry price, average price, and the PnL in points.
HOW TO USE
Use the tool to calculate the average price of your long or short position in any market and timeframe.
Get the current PnL for the investment and keep track of your entry prices.
APPLY TO CHART
When you apply the tool on the chart, you have to select five entry points, and within the setting panel, you can choose how many of these five entry points are active and how many contracts each entry has. Then, the tool will display your average price based on the entries and the number of contracts used at each price level.
LONG
Set your entries and the number of contracts at each price level. The indicator will then display all your long entries and at what price you will break even. The entry line changes color based on if the entry is in profit or loss.
SHORT
Set your entries and the number of contracts at each price level. The indicator will then display all your short entries and at what price you will break even. The entry line changes color based on if the entry is in profit or loss.
-----------------
Disclaimer
Copyright by Zeiierman.
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual’s trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Example: Monte Carlo Simulation
Experimental:
Example execution of a Monte Carlo Simulation applied to the markets (this is my interpretation of the algorithm, so inconsistencies may appear).
note:
the algorithm is very demanding, so performance is limited.
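Since the description leaves the algorithm open to interpretation, here is one common way to apply Monte Carlo to price data, sketched in Python (bootstrapping historical log returns; every name and default below is an assumption, not the script's actual code):

import numpy as np

def monte_carlo_paths(closes, n_paths=500, horizon=50, seed=42):
    # Resample historical log returns to simulate future price paths.
    rng = np.random.default_rng(seed)
    log_ret = np.diff(np.log(np.asarray(closes, float)))
    draws = rng.choice(log_ret, size=(n_paths, horizon), replace=True)
    return closes[-1] * np.exp(np.cumsum(draws, axis=1))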
RAT Moving Average Crossover Strategy
This is based on general moving average crossovers, with some modifications made to generate buy/sell signals.
Weis pip zigzag jayy
What you see here is the Weis pip zigzag wave plotted directly on the price chart. This script is the companion to the Weis pip wave, which is plotted in the lower panel of the displayed chart and can be used as an alternate way of plotting the same results. The Weis pip zigzag wave shows how far in terms of price a Weis wave has traveled through the duration of a Weis wave. The Weis pip zigzag wave is used in combination with the Weis cumulative volume wave. The two waves must be set to the same "wave size".
To use this script you must set the wave size. Using the traditional Weis method, simply enter the desired wave size in the box "Select Weis Wave Size". In this example, it is set to 5. Each wave for each security and each timeframe requires its own wave size. Although not the traditional method, a more automatic way to set wave size would be to use ATR. This is not the true Weis method, but it does give you similar waves and, importantly, without the hassle described above. Once the Weis wave size is set, the pip wave will be shown.
I have put a pip zigzag of a 5-point Weis wave on the bar chart - that is a different script. I have added it to allow your eye to see what a Weis wave looks like. You will notice that the wave is not drawn in straight lines connecting wave tops to bottoms; this is a function of the limitations of Pine Script version 1. This script would need to be in version 4 to allow straight lines. There are too many calculations within this script to allow conversion to Pine Script version 4 or even version 3. I am in the process of rewriting this script to reduce the number of calculations and streamline the algorithm.
The numbers plotted on the chart are calculated to be relative numbers. The script is limited to showing only three numerals vertically, so only the highest three digits of a value are shown. For example, if the highest recent pip value is 12,345, only the first 3 numerals would be displayed, i.e. 123. But suppose there is a recent value of 691. It would not be helpful to display 691 if the other wave size is shown as 123. To give the appropriate relative value, the script will show a value of 7 instead of 691. This informs you of the relative magnitude of the values. This is done automatically within the script. There is likely no need to manually override the automatically calculated value. I will create a video that demonstrates the manual override method.
What is a Weis wave? David Weis has been recognized as a Wyckoff method analyst; he has written two books, one of which, Trades About to Happen, describes the evolution of the now-popular Weis wave. The method employed by Weis is to identify waves of price action and to compare the strength of the waves on characteristics of wave strength. Chief among the characteristics of strength is the cumulative volume of the wave. There are other markers that Weis uses as well, for example the actual price change of the Weis wave from start to finish. Weis also uses time, particularly when using a Renko chart. Weis specifically uses candle or bar closes to define all wave action, i.e. a line chart.
David Weis did a futures io video which is a popular source of information about his method.
This is the identical script with the identical settings but without the offending links. If you want to see the pip Weis method in practice, then search Weis pip wave. If you want to see the Weis chart in PDF, then message me and I will give you a link to the Weis PDF. Why would you want to see the Weis chart for May 27, 2020? Merely to confirm the veracity of my algorithm. You could compare my Weis chart from the same period to the David Weis chart from May 27. Both waves are for the ES!1 4-hour chart and both for a wave size of 5.
Price Action and 3 EMAs Momentum plus Sessions Filter
This indicator plots on the chart the parameters and signals of the Price Action and 3 EMAs Momentum plus Sessions Filter algorithmic strategy. The strategy trades based on time-series (absolute) and relative momentum of price close, highs, lows and 3 EMAs.
I am still learning PS and therefore I have only been able to write the indicator up to the Signal generation. I plan to expand the indicator to Entry Signals as well as the full Strategy.
The strategy works best on EURUSD in the 15-minute TF during the London and New York sessions, with 1-to-1 TP and SL of 30 pips, with lot sizes resulting in 3% risk of the account per trade. I have already written the full strategy in another language and platform and backtested it for ten years; it was profitable for 7 of the 10 years with an average profit of 15% p.a., which can easily be increased by increasing the risk per trade. I have been trading it live on that platform for over two years and it is profitable.
Contributions from experienced PS coders in completing the Indicator as well as writing the Strategy and back testing it on Trading View will be appreciated.
STRATEGY AND INDICATOR PARAMETERS
Three periods of 12, 48 and 96 in the 15 min TF, which are equivalent to 3, 12 and 24 hours, i.e. (15 min * period / 60 min), are the foundational inputs for all the parameters of the PA & 3 EMAs Momentum + SF Algo Strategy and its Indicator.
3 EMAs momentum parameters and conditions
• FastEMA = ema of 12 periods
• MedEMA = ema of 48 periods
• SlowEMA = ema of 96 periods
• All the EMAs analyse price close for up to 96 (15 min periods) equivalent to 24 hours
• There’s Upward EMA momentum if price close > FastEMA and FastEMA > MedEMA and MedEMA > SlowEMA
• There’s Downward EMA momentum if price close < FastEMA and FastEMA < MedEMA and MedEMA < SlowEMA
PA momentum parameters and conditions
• HH = Highest High of 48 periods from 1st closed bar before current bar
• LL = Lowest Low of 48 periods from 1st closed bar before current bar
• Previous HH = Highest High of 84 periods from 12th closed bar before current bar
• Previous LL = Lowest Low of 84 periods from 12th closed bar before current bar
• All the HH & LL and prevHH & prevLL are within the 96 periods from the 1st closed bar before current bar and therefore indicative of momentum during the past 24 hours
• There’s Upward PA momentum if price close > HH and HH > prevHH and LL > prevLL
• There’s Downward PA momentum if price close < LL and LL < prevLL and HH < prevHH
Signal conditions and Status (BuySignal, SellSignal or Neutral)
• The strategy generates Buy or Sell Signals if both 3 EMAs and PA momentum conditions are met for each direction and these occur during the London and New York sessions
• BuySignal if price close > FastEMA and FastEMA > MedEMA and MedEMA > SlowEMA and price close > HH and HH > prevHH and LL > prevLL and timeinrange (LDN&NY) else Neutral
• SellSignal if price close < FastEMA and FastEMA < MedEMA and MedEMA < SlowEMA and price close < LL and LL < prevLL and HH < prevHH and timeinrange (LDN&NY) else Neutral
Entry conditions and Status (EnterBuy, EnterSell or Neutral)(NOT CODED YET)
• ENTRY IS NOT AT THE SIGNAL BAR but at the current bar tick price retracement to FastEMA after the signal
• EnterBuy if current bar tick price <= FastEMA and current bar tick price > prevHH at the time of the Buy Signal
• EnterSell if current bar tick price >= FastEMA and current bar tick price < prevLL at the time of the Sell Signal
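The signal conditions above translate almost line-for-line into code; here is a pandas sketch (the session filter and entry logic are omitted, and pandas' ewm is only an approximation of a charting platform's EMA seeding):

import pandas as pd

def signals(df):  # df has 'close', 'high', 'low' columns of 15-min bars
    c = df['close']
    fast = c.ewm(span=12, adjust=False).mean()
    med = c.ewm(span=48, adjust=False).mean()
    slow = c.ewm(span=96, adjust=False).mean()
    hh = df['high'].shift(1).rolling(48).max()        # 48 bars ending 1 bar back
    ll = df['low'].shift(1).rolling(48).min()
    prev_hh = df['high'].shift(12).rolling(84).max()  # 84 bars ending 12 bars back
    prev_ll = df['low'].shift(12).rolling(84).min()
    buy = (c > fast) & (fast > med) & (med > slow) & (c > hh) & (hh > prev_hh) & (ll > prev_ll)
    sell = (c < fast) & (fast < med) & (med < slow) & (c < ll) & (ll < prev_ll) & (hh < prev_hh)
    return buy, sell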
NAND Perceptron
Experimental NAND Perceptron based upon a Python template that aims to predict NAND gate outputs. A perceptron is one of the foundational building blocks of nearly all advanced neural network layers and models for algo trading and machine learning.
The goal behind this script was threefold:
To prove and demonstrate that an ACTUAL working neural net can be implemented in Pine, even if incomplete.
To pave the way for other traders and coders to iterate on this script and push the boundaries of Tradingview strategies and indicators.
To see if a self-contained neural network component for parameter optimization within Pinescript was hypothetically possible.
NOTE: This is a highly experimental proof of concept - this is NOT a ready-made template to include or integrate into existing strategies and indicators, yet (emphasis YET - neural networks have a lot of potential utility and potential when utilized and implemented properly).
Hardcoded NAND Gate outputs with Bias column (X0):
// NAND Gate + X0 Bias and Y-true
// X0 // X1 // X2 // Y
// 1 // 0 // 0 // 1
// 1 // 0 // 1 // 1
// 1 // 1 // 0 // 1
// 1 // 1 // 1 // 0
Column X0 is bias feature/input
Column X1 and X2 are the NAND Gate
Column Y is the y-true values for the NAND gate
yhat is the prediction at that timestep
F0,F1,F2,F3 are the Dot products of the Weights (W0,W1,W2) and the input features (X0,X1,X2)
Learning rate and activation function threshold are enabled by default as input parameters
Uncomment sections for more training iterations/epochs:
Loop optimizations would be amazing to have for a selectable length for training iterations/epochs but I'm not sure if it's possible in Pine with how this script is structured.
Error metrics and loss have not been implemented due to difficulty with script length and iterations vs epochs - I haven't been able to configure the input parameters to successfully predict the right values for all four y-true values of the NAND gate (I've only been able to get 3/4; if you're able to get all four predictions to be correct, please let me know).
// //---- REFERENCE for final output
// A3 := 1, y0 true
// B3 := 1, y1 true
// C3 := 1, y2 true
// D3 := 0, y3 true
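For comparison, the same perceptron is only a few lines of Python; this sketch converges to the correct NAND outputs within 20 epochs (the learning rate and threshold here are illustrative defaults, not the script's settings):

import numpy as np

X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)  # X0 bias, X1, X2
y = np.array([1, 1, 1, 0], dtype=float)                                  # NAND y-true

w = np.zeros(3)           # W0 (bias weight), W1, W2
lr, threshold = 0.1, 0.5  # learning rate, activation threshold

for epoch in range(20):   # training iterations/epochs
    for xi, yi in zip(X, y):
        yhat = 1.0 if xi @ w > threshold else 0.0  # step activation on the dot product
        w += lr * (yi - yhat) * xi                 # perceptron update rule

print([1.0 if xi @ w > threshold else 0.0 for xi in X])  # -> [1.0, 1.0, 1.0, 0.0]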
PLEASE READ: Source article/template and main code reference:
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
Baseline-C [ID: AC-P]
The "AC-P" version of jiehonglim's NNFX Baseline script is my personal customized version of the NNFX Baseline concept as part of the NNFX Algorithm stack/structure for 1D Trend Trading for Forex. Everget's JMA implementation is used for the baseline smoothing method, with optional ATR bands at 1.0x and 1.5x from the baseline.
NNFX = No Nonsense Forex
Baseline = Component of the NNFX Algorithm that consists of a single moving average
Baseline ---> Meant to be used in conjunction with ATR/C1/C2/Vol Indicator/Exit Indicator as per NNFX Algorithm setup/structure. C1 is 1st Confirmation Indicator, C2 is 2nd Confirmation Indicator.
JMA (Jurik Moving Average) is used for the baseline and slow baseline.
A slow baseline option is included, but disabled by default.
The faint orange/purple lines are 1.0x/1.5x ATR from the Baseline, and are what I use as potential TP/SL targets or to evaluate when to stay out of a trade (chop/missed entry/exit/other/ATR breach), depending on the trade setup (in conjunction with C1/C2/Vol Indicator/Exit Indicator)
This script is heavily based upon jiehonglim's NNFX Baseline script for signaling, barcoloring, and ATR.
SSL Channel option included but disabled by default (Erwinbeckers SSL component)
POC (Point of Control) from Volume Profile is included/enabled by default for both the current timeframe and 12HR timeframe
03.freeman's InfoPanel Divergence Indicator was used as a reference to replace the current/previous ATR information infopanel/info draw from jiehonglim's script. I'm not sure whether I like the previous way ATR info was displayed vs how I have it currently, but it's something that is completely optional.
Specifically: I am tuning this baseline/indicator for 1D trading as part of the NNFX system, for Forex.
DO NOT USE THIS INDICATOR WITHOUT PROPER TUNING/ADJUSTMENT for your timeframe and asset class.
Note about lack of alerts:
Alerts for baseline crosses (and other crosses) have been purposefully omitted for this version upon initial publication. While getting alerts for baseline crosses under certain conditions/filtered conditions that eliminate low-importance signals and crossover whipsaw would be great, it's something I'm still looking into.
SPECIFICALLY: There are entry, exit, take profit, and continuation signal components relating the Baseline to the rest of the NNFX Algorithm stack (ATR/C1/C2/Vol Indicator/Exit Indicator), including but not limited to the "1 candle rule" and the "7 candle rule" as per NNFX.
Implementing alerts that are significant that also factor in these rules while reducing alert spam/false signals would be ideal, but it's also the HTF/Daily chart - visually, entry/exit/continuation signal alignment is easy to spot when trading 1D - alerts may be redundant/a pursuit in diminishing returns (for now).
//-------------------------------------------------------------------
// Acknowledgements/Reference:
// jiehonglim, NNFX Baseline Script - Moving Averages
//
// Fractured, Many Moving Averages
//
// everget, Jurik Moving Average/JMA
//
// 03.freeman, InfoPanel Divergence Indicator
//
// Ggqmna Volume stops
//
// Libertus RSI Divs
//
// ChrisMoody, CM_Price-Action-Bars-Price Patterns That Work
//
// Erwinbeckers SSL Channel
//
Simulateur Carnet d'Ordres & Liquidité [Sese] - Custom
🔹 Indicator Name
Order Book & Liquidity Simulator - Custom
🔹 Concept and Functionality
This indicator is a technical analysis tool designed to visually simulate market depth (Order Book) and potential liquidity zones.
It is important to adhere to TradingView's transparency rules: This script does not access real Level 2 data (the actual exchange order book). Instead, it uses a deductive algorithm based on historical Price Action to estimate where Buy Limit (Bid) and Sell Limit (Ask) orders might be resting.
Methodology used by the script:
Pivot Detection: The indicator scans for significant Swing Highs and Swing Lows over a user-defined lookback period (Length); a sketch of this step follows the list below.
Level Projection: These pivots are projected to the right as horizontal lines.
Red Lines (Ask): Represent potential resistance zones (sellers).
Blue Lines (Bid): Represent potential support zones (buyers).
Liquidity Management (Absorption): The script is dynamic. If the current price crosses a line, the indicator assumes the liquidity at that level has been consumed (orders filled). The line is then automatically deleted from the chart.
Density Profile (Right Side): Horizontal bars appear to the right of the current price. These approximate a "Time Price Opportunity" or Volume Profile, showing where the market has spent the most time recently.
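A bare-bones version of the pivot-detection step looks like this in Python (illustrative only; the script itself draws and deletes lines in Pine):

def pivots(high, low, length):
    # A swing high/low is the extreme of `length` bars on each side of it.
    swing_highs, swing_lows = [], []
    for i in range(length, len(high) - length):
        window_h = high[i - length:i + length + 1]
        window_l = low[i - length:i + length + 1]
        if high[i] == max(window_h):
            swing_highs.append((i, high[i]))  # candidate Ask (red) level
        if low[i] == min(window_l):
            swing_lows.append((i, low[i]))    # candidate Bid (blue) level
    return swing_highs, swing_lows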
🔹 User Manual (Settings)
Here is how to configure the inputs to match your trading style:
1. Detection Algorithm
Lookback Length (Candles): Determines the sensitivity of the pivots.
Low value (e.g., 10): Shows many lines (scalping/short term).
High value (e.g., 50): Shows only major structural levels (swing trading).
Volume Factor: (Technical note: In this specific code version, this variable is calculated but the lines are primarily drawn based on geometric pivots).
2. Visual Settings
Show Price Lines (Bid/Ask): Toggles the horizontal Support/Resistance lines on or off.
Show Volume Profile: Toggles the heatmap-style bars on the right side of the chart.
Extend Lines: If checked, untouched lines will extend to the right towards the current price bar.
3. Colors and Transparency Management
Customize the aesthetics to keep your chart clean:
Bid / Ask Colors: Choose your base colors (Default is Blue and Red).
Line Transparency (%): Crucial for chart visibility.
0% = Solid, bright colors.
80-90% = Very subtle, faint lines (recommended if you overlay this on other tools).
Text Size: Adjusts the size of the price labels ("BUY LIMIT" / "SELL LIMIT").
🔹 How to Read the Indicator
Rejections: Unbroken lines act as potential walls. Watch for price reaction when approaching a blue line (support) or red line (resistance).
Breakouts/Absorption: When a line disappears, it means the level has been breached. The market may then seek the next liquidity level (the next line).
Density (Right-side boxes): More opaque/visible boxes indicate a price zone "accepted" by the market (consolidation). Empty gaps suggest an imbalance where price might move through quickly.
⚠️ Disclaimer
This script is for educational and technical analysis purposes only. It is a simulation based on price history, not real-time order book data. Past performance is not indicative of future results. Trading involves risk.
Fast Autocorrelation Estimator
█ Overview:
The Fast ACF and PACF Estimation indicator efficiently calculates the autocorrelation function (ACF) and partial autocorrelation function (PACF) using an online implementation. It helps traders identify patterns and relationships in financial time series data, enabling them to optimize their trading strategies and make better-informed decisions in the markets.
█ Concepts:
Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay.
This indicator displays autocorrelation based on lag number. The autocorrelation is not displayed over time on the x-axis; it is indexed by lag number, which ranges from 1 to 30. The calculations can be done with "Log Returns", "Absolute Log Returns" or "Original Source" (the price of the asset displayed on the chart).
When calculating autocorrelation, the resulting value will range from +1 to -1, in line with the traditional correlation statistic. An autocorrelation of +1 represents a perfect correlation (an increase seen in one time series leads to a proportionate increase in the other time series). An autocorrelation of -1, on the other hand, represents a perfect inverse correlation (an increase seen in one time series results in a proportionate decrease in the other time series). Lag number indicates which historical data point is autocorrelated. For example, if lag 3 shows significant autocorrelation, it means current data is influenced by the data three bars ago.
The Fast Online Estimation of ACF and PACF Indicator is a powerful tool for analyzing the linear relationship between a time series and its lagged values in TradingView. The indicator implements an online estimation of the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to 30 lags, providing a real-time assessment of the underlying dependencies in your time series data. The Autocorrelation Function (ACF) measures the linear relationship between a time series and its lagged values, capturing both direct and indirect dependencies. The Partial Autocorrelation Function (PACF) isolates the direct dependency between the time series and a specific lag while removing the effect of any indirect dependencies.
This distinction is crucial in understanding the underlying relationships in time series data and making more informed decisions based on those relationships. For example, let's consider a time series with three variables: A, B, and C. Suppose that A has a direct relationship with B, B has a direct relationship with C, but A and C do not have a direct relationship. The ACF between A and C will capture the indirect relationship between them through B, while the PACF will show no significant relationship between A and C, as it accounts for the indirect dependency through B. This means that when the ACF is significant at lag 5, the dependency detected could be caused by an observation that came in between, and the PACF accounts for that.
This indicator leverages the Fast Moments algorithm to efficiently calculate autocorrelations, making it ideal for analyzing large datasets or real-time data streams. By using the Fast Moments algorithm, the indicator can quickly update ACF and PACF values as new data points arrive, reducing the computational load and ensuring timely analysis. The PACF is derived from the ACF using the Durbin-Levinson algorithm, which helps in isolating the direct dependency between a time series and its lagged values, excluding the influence of other intermediate lags.
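An offline reference implementation of both functions is short; this Python sketch computes the sample ACF directly and derives the PACF with the Durbin-Levinson recursion (it does not reproduce the script's online Fast Moments updates):

import numpy as np

def acf(x, max_lag=30):
    x = np.asarray(x, float) - np.mean(x)
    denom = x @ x
    return np.array([(x[:-k] @ x[k:]) / denom for k in range(1, max_lag + 1)])

def pacf(rho):
    # Durbin-Levinson recursion; rho[k-1] is the ACF at lag k.
    m = len(rho)
    phi = np.zeros((m + 1, m + 1))
    out = np.zeros(m)
    phi[1, 1] = out[0] = rho[0]
    for k in range(2, m + 1):
        prev = phi[k - 1, 1:k]
        num = rho[k - 1] - prev @ rho[k - 2::-1]
        den = 1.0 - prev @ rho[:k - 1]
        phi[k, k] = out[k - 1] = num / den
        phi[k, 1:k] = prev - phi[k, k] * prev[::-1]
    return out

# For an IID series, the 95% confidence band is roughly +/- 1.96 / sqrt(n).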
█ How to Use the Indicator:
Interpreting autocorrelation values can provide valuable insights into the market behavior and potential trading strategies.
When applying autocorrelation to log returns, and a specific lag shows a high positive autocorrelation, it suggests that the time series tends to move in the same direction over that lag period. In this case, a trader might consider using a momentum-based strategy to capitalize on the continuation of the current trend. On the other hand, if a specific lag shows a high negative autocorrelation, it indicates that the time series tends to reverse its direction over that lag period. In this situation, a trader might consider using a mean-reversion strategy to take advantage of the expected reversal in the market.
ACF of log returns:
Absolute returns are often used to as a measure of volatility. There is usually significant positive autocorrelation in absolute returns. We will often see an exponential decay of autocorrelation in volatility. This means that current volatility is dependent on historical volatility and the effect slowly dies off as the lag increases. This effect shows the property of "volatility clustering". Which means large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.
ACF of absolute log returns:
Autocorrelation in price is always significantly positive and has an exponential decay. This predictably positive and relatively large value makes the autocorrelation of price (not returns) generally less useful.
ACF of price:
█ Significance:
The significance of a correlation metric tells us whether we should pay attention to it. In this script, we use 95% confidence interval bands that adjust to the size of the sample. If the observed correlation at a specific lag falls within the confidence interval, we consider it not significant and the data to be random or IID (identically and independently distributed). This means that we can't confidently say that the correlation reflects a real relationship, rather than just random chance. However, if the correlation is outside of the confidence interval, we can state with 95% confidence that there is an association between the lagged values. In other words, the correlation is likely to reflect a meaningful relationship between the variables, rather than a coincidence. A significant difference in either ACF or PACF can provide insights into the underlying structure of the time series data and suggest potential strategies for traders. By understanding these complex patterns, traders can better tailor their strategies to capitalize on the observed dependencies in the data, which can lead to improved decision-making in the financial markets.
Significant ACF but not significant PACF: This might indicate the presence of a moving average (MA) component in the time series. A moving average component is a pattern where the current value of the time series is influenced by a weighted average of past values. In this case, the ACF would show significant correlations over several lags, while the PACF would show significance only at the first few lags and then quickly decay.
Significant PACF but not significant ACF: This might indicate the presence of an autoregressive (AR) component in the time series. An autoregressive component is a pattern where the current value of the time series is influenced by a linear combination of past values at specific lags.
Often we find both significant ACF and PACF, in that scenario simply and AR or MA model might not be sufficient and a more complex model such as ARMA or ARIMA can be used.
█ Features:
Source selection: User can choose either 'Log Returns' , 'Absolute Returns' or 'Original Source' for the input data.
Autocorrelation Selection: User can choose either 'ACF' or 'PACF' for the plot selection.
Plot Selection: User can choose either 'Autocorrelarrogram' or 'Historical Autocorrelation' for plotting the historical autocorrelation at a specified lag.
Max Lag: User can select the maximum number of lags to plot.
Precision: User can set the number of decimal points to display in the plot.
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
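A minimal sketch of the idea (not the script's actual implementation): keep the rolling window twice, once in arrival order and once sorted, and update both incrementally on every bar instead of re-sorting.

//@version=5
indicator("Online sorted window (sketch)")
len = input.int(100, "Window size")
var float[] window = array.new_float() // arrival order, for eviction
var float[] sorted = array.new_float() // ascending order statistics
// insert the new value at its sorted position (linear scan for clarity)
int n = array.size(sorted)
int pos = n
if n > 0
    for i = 0 to n - 1
        if array.get(sorted, i) >= close
            pos := i
            break
array.insert(sorted, pos, close)
array.push(window, close)
// evict the oldest value from both arrays once the window is full
if array.size(window) > len
    old = array.shift(window)
    array.remove(sorted, array.indexof(sorted, old))
// demo: the running median is just the middle order statistic
plot(array.get(sorted, math.floor(array.size(sorted) / 2)), "Running median")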
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal distribution more closely than conventional moments do. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal distribution more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful in situations where the sample size is small, such as when working with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10,000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with the L-moments of certain commonly used probability distributions. Finance is especially known for the use of certain fat-tailed distributions such as Laplace or Student-t. We have built in the theoretical values of L-kurtosis for certain common distributions, so the observed L-kurtosis can be compared with that of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of the dataset. One can compare the observed kurtosis with the kurtosis of a selected theoretical distribution.
Support & Resistance Pro by 🅰🅻🅿
A Multi-Layer Market Structure Engine for Professional Price Analysis
Support & Resistance Pro is a next-generation price structure algorithm designed to identify the most meaningful support and resistance levels across any market or timeframe.
Instead of relying on simple fractals, random pivots, or fixed-distance lines, this script analyzes the way price interacts with historical levels — including wick reactions, close rejections, structural pivots, retests, and liquidity sweeps.
The result is a clean, intelligent, and highly accurate market structure map that adapts to every style of trading.
🚀 Key Features
1. Multi-Layer S/R Engine (Up to 20 Dynamic Levels)
The algorithm computes and ranks up to 20 unique levels, from strongest to weakest.
Each level is scored using:
Structural pivot strength
Number of historical touches
Closeness of each interaction
Market memory & reaction weight
Breakout and retest behavior
This produces an objective hierarchy of price levels — ideal for scalping, day trading, or swing analysis.
2. Smart Strength Filter
To remove noise, the Smart Strength Filter evaluates how often price has interacted with each level and hides the ones that lack significance.
You can customize:
Lookback range
Minimum touch count
Touch tolerance sensitivity
This ensures your chart displays only the most relevant and reliable structural zones for the current environment.
3. Heat Map Intensity Coloring
Levels automatically change opacity based on their strength:
More touches → stronger color
Fewer touches → lighter color
This creates a natural visual heat map that highlights where market memory is strongest — perfect for identifying high-probability breakout or reversal zones.
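As an illustration of how strength-based opacity can be wired up in Pine (the touch count and the mapping here are hypothetical, not this script's internals):

//@version=5
indicator("Strength-based opacity (sketch)", overlay = true)
// More touches -> lower transparency -> a visually "hotter" level.
levelColor(color base, int touches) =>
    color.new(base, math.max(10, 90 - touches * 10))
// demo: one level at the first bar's high, with a made-up touch count
var line lvl = line.new(bar_index, high, bar_index + 1, high, extend = extend.right)
line.set_color(lvl, levelColor(color.red, 5))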
4. Multi-Timeframe Compatibility
Project higher timeframe S/R onto lower timeframe charts to enhance confluence:
Day traders: render 4H levels on 5m–15m
Swing traders: render 1D levels on 1H
Scalpers: render 1H levels on 1m–3m
This gives you powerful structural awareness without switching charts.
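A hedged sketch of one way to do this with request.security (the 4H timeframe and pivot lengths are demo values, not the script's internals):

//@version=5
indicator("HTF levels on LTF (sketch)", overlay = true)
// fetch a confirmed 4H pivot high and carry it forward on the lower timeframe
htfPivot = request.security(syminfo.tickerid, "240", ta.pivothigh(high, 10, 10), lookahead = barmerge.lookahead_off)
var float lastLevel = na
if not na(htfPivot)
    lastLevel := htfPivot
plot(lastLevel, "Last 4H pivot high", color.orange)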
5. Clean Visual Design
Every element has been designed to stay out of your way:
Choose your preferred level count (8–20)
Adjustable line thickness
Label sizing and offset controls
Optional price tags
Light or dark color-friendly styling
The visual layout is clean, modern, and tailored for long chart sessions.
6. Profile Presets for Every Trader
Four built-in trading profiles are included:
Scalp Mode
Reactive levels
Tight tolerance
Best for 1m–5m
Day Trade Mode
Balanced structure
Ideal for 5m–1H
Swing Mode
Broad pivots
Higher significance
Perfect for 4H–1D
Custom Mode
Full control over every parameter.
🎯 How Traders Use This
Identify major reversal zones
Find liquidity pockets before they form
Improve breakout accuracy
Locate fair-value areas for entries
Combine HTF structure with LTF setups
Simplify noise-heavy charts
Whether you’re looking for scalping precision or long-term structure, the indicator adapts instantly.
⚠️ Disclaimer
This script is intended for market analysis and educational purposes only.
It does not constitute financial advice.
Always backtest and verify settings before trading live markets.
🅐🅛🅟 – Author
Created with care, precision, and countless hours of testing by alpprofitmax.
Licensed under the Mozilla Public License 2.0.
ATR Based TMA Bands [NeuraAlgo]
ATR-Based TMA Bands
ATR-Based TMA Bands is a volatility-adaptive channel system built around a smoothed Triangular Moving Average (TMA).
It identifies trend direction, momentum shifts, and reversal opportunities using a combination of TMA structure and ATR-driven channel expansion.
Perfect for traders who want a clean, intelligent, and adaptive market framework.
Made by NeuraAlgo.
🔷 How It Works
1. 🔹 TMA Midline (Core Trend)
The indicator builds a smooth and stable midline using:
📐 Triangular Moving Average
🔄 Additional EMA smoothing
This creates a low-noise trend curve that reacts cleanly to real momentum changes.
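A minimal sketch of such a midline (an SMA-of-SMA is one common triangular-average construction; lengths are placeholders, not this script's defaults):

//@version=5
indicator("TMA midline (sketch)", overlay = true)
len = input.int(20, "TMA length")
smoothLen = input.int(5, "EMA smoothing")
tma = ta.sma(ta.sma(close, len), len) // triangular moving average
midline = ta.ema(tma, smoothLen)      // extra EMA pass for stability
plot(midline, "Midline", color.teal, 2)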
2. 📈 Volatility-Adjusted Bands
The channels are built from:
📊 Standard Deviation × Expansion Multiplier
📏 Three ATR-based outer layers
These bands:
Expand in high volatility
Contract in stable markets
Reveal pullbacks, breakout zones, and exhaustion points
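An illustrative reconstruction of this layering, assuming placeholder lengths and multipliers (the lower side mirrors the upper):

//@version=5
indicator("Layered volatility bands (sketch)", overlay = true)
len = 20
mid = ta.ema(ta.sma(ta.sma(close, len), len), 5) // smoothed TMA midline
inner = ta.stdev(close, len) * 1.5               // stdev x expansion multiplier
step = ta.atr(14)                                // ATR spacing between layers
plot(mid, "Midline", color.gray)
plot(mid + inner, "Inner upper", color.teal)
plot(mid - inner, "Inner lower", color.teal)
plot(mid + inner + step, "ATR layer +1", color.new(color.teal, 60))
plot(mid + inner + 2 * step, "ATR layer +2", color.new(color.teal, 70))
plot(mid + inner + 3 * step, "ATR layer +3", color.new(color.teal, 80))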
3. 🔁 Trend Tilt Algorithm
Slope is measured using an ATR-normalized tilt formula:
atrBase = ta.atr(smoothLen)
tilt = (midline - midline[1]) / (0.1 * atrBase)
This classifies the trend into:
Bullish
Bearish
Neutral
The bar colors and midline adjust automatically to match market direction.
4. 🔄 Reversal Detection (Turn Signals)
The indicator flags directional flips:
Turn Up → bearish → bullish shift
Turn Down → bullish → bearish shift
These are early reversal alerts ideal for swing traders.
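A sketch of how such flips can fall out of the published tilt formula (the ±1 thresholds and the lengths are placeholders):

//@version=5
indicator("Tilt flips (sketch)", overlay = true)
smoothLen = 20
midline = ta.ema(ta.sma(ta.sma(close, smoothLen), smoothLen), 5)
atrBase = ta.atr(smoothLen)
tilt = (midline - midline[1]) / (0.1 * atrBase)
dir = tilt > 1 ? 1 : tilt < -1 ? -1 : 0 // bullish / bearish / neutral
turnUp = dir == 1 and dir[1] != 1       // flip into a bullish state
turnDown = dir == -1 and dir[1] != -1   // flip into a bearish state
plotshape(turnUp, "Turn Up", shape.triangleup, location.belowbar, color.green)
plotshape(turnDown, "Turn Down", shape.triangledown, location.abovebar, color.red)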
5. 🎯 Flip Buy / Flip Sell Signals
Deep volatility extensions create high-probability re-entry zones:
Flip Buy → price rebounds from oversold ATR zone
Flip Sell → price rejects from overbought ATR zone
Great for:
Mean-reversion entries
Trend re-tests
Pullback trades
Exhaustion signals
📌 How to Use This Indicator
✔ Trend Trading
Follow trend using tilt-colored candles
Use midline as dynamic trend filter
Use channels for breakout/pullback entries
✔ Reversal Trading
Watch for Turn Up / Turn Down labels
Flip signals show where the market is over-stretched
✔ Risk Management
ATR channels automatically adjust to volatility
Helps with smarter SL/TP placement
⭐ Best For
Trend traders
Swing traders
Reversal hunters
Volatility lovers
Anyone wanting a smart, clean technical framework
💡 Core Features
TMA-smoothed trend detection
Multi-layer ATR expansion channels
Intelligent trend tilt algorithm
Turn Up / Turn Down reversal markers
Flip Buy / Flip Sell exhaustion signals
Adaptive bar coloring
Clean and professional visual design
Hybrid Flow Master
📊 Hybrid Flow Master - Professional Trading Indicator
Overview
Hybrid Flow Master is an advanced all-in-one trading indicator that combines Smart Money Concepts, institutional order flow analysis, and multi-timeframe confluence scoring to identify high-probability trade setups. Designed for both scalpers and swing traders across all markets (Forex, Crypto, Stocks, Indices).
🎯 Key Features
1. Intelligent Confluence System (0-100% Scoring)
Proprietary scoring algorithm that weighs multiple factors
Only signals when minimum confidence threshold is met
Real-time probability calculations for each setup
Signal quality grading: A+, A, B, C ratings
2. Smart Money Concepts (SMC)
Automatic Order Block detection (bullish/bearish)
Fair Value Gap (FVG) identification
Market structure analysis (Higher Highs, Lower Lows)
Swing high/low tracking with visual markers
3. Multi-Timeframe Analysis
Higher timeframe trend filter for confluence
Customizable HTF periods (1H, 4H, Daily, etc.)
Prevents counter-trend trades
Aligns entries with major trends
4. Volume Flow Analysis
Volume spike detection with customizable thresholds
Volume delta calculations (buying vs selling pressure)
Institutional footprint identification
Background highlighting for high-volume bars
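A hedged sketch of these two volume ideas (the 2.0 threshold and the close-position delta proxy are illustrative, not this script's internals):

//@version=5
indicator("Volume spike and delta (sketch)")
threshold = input.float(2.0, "Spike multiple")
avgVol = ta.sma(volume, 20)
volSpike = volume > avgVol * threshold
// crude buy/sell split from where the bar closes within its range
buyShare = high == low ? 0.5 : (close - low) / (high - low)
delta = volume * (2 * buyShare - 1) // > 0 buying pressure, < 0 selling
plot(delta, "Volume delta", delta >= 0 ? color.green : color.red, style = plot.style_columns)
bgcolor(volSpike ? color.new(color.yellow, 85) : na)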
5. Advanced Risk Management
ATR-based stop loss calculation
Automatic take profit levels
Customizable risk/reward ratios (1:1, 1:2, 1:3+)
Visual SL/TP lines on chart
Position sizing guidance
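The ATR-based stop/target construction generally looks like this (multiplier and R:R are placeholders; shown for the long side only):

//@version=5
indicator("ATR stop and target (sketch)", overlay = true)
atrMult = input.float(1.5, "ATR multiplier")
rr = input.float(2.0, "Risk:Reward")
atrVal = ta.atr(14)
longSL = close - atrMult * atrVal       // stop below price by a volatility unit
longTP = close + rr * (close - longSL)  // target scaled by the chosen R:R
plot(longSL, "Long SL", color.red)
plot(longTP, "Long TP", color.green)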
6. Professional Dashboard
Real-time HUD displaying:
Market bias (Bullish/Bearish/Neutral)
Higher timeframe trend status
Current confluence percentage
Volume status (Normal/High)
RSI reading with color coding
ATR volatility measure
Signal quality grade
7. Smart Alert System
Bullish confluence signals
Bearish confluence signals
Volume spike notifications
Customizable alert messages
Works with mobile app notifications
📈 What Makes It Unique?
✅ No Repainting - All signals are confirmed and final
✅ Probability-Based - Shows confidence level, not just binary signals
✅ Multi-Factor Confluence - Combines structure, volume, momentum, and HTF analysis
✅ Clean Interface - Toggle individual components on/off
✅ Works on All Timeframes - From 1-minute scalping to daily swing trading
✅ Universal Markets - Forex, Crypto, Stocks, Indices, Commodities
🎨 Customization Options
Adjustable swing detection length
Volume threshold settings
Minimum confluence score filter
Custom color schemes
Dashboard position (4 corners)
Show/hide individual components
Risk/reward ratio adjustment
ATR multiplier for stops
📊 Best Used For:
✔️ Scalping (1m - 15m charts)
✔️ Day Trading (15m - 1H charts)
✔️ Swing Trading (4H - Daily charts)
✔️ Trend Following
✔️ Reversal Trading
✔️ Breakout Trading
💡 How to Use:
Add indicator to chart - Works immediately with default settings
Set your timeframe - Choose your trading style
Wait for signals - Green BUY or Red SELL labels with confidence %
Check confluence score - Higher % = better quality setup
Review dashboard - Confirm market bias and HTF trend
Manage risk - Use provided SL/TP levels or adjust to your preference
Set alerts - Get notified of high-probability setups
⚙️ Recommended Settings:
For Scalping (1m-5m):
Swing Length: 5-7
Min Confluence: 70%
HTF: 15m or 1H
For Day Trading (15m-1H):
Swing Length: 10-15
Min Confluence: 60%
HTF: 4H or Daily
For Swing Trading (4H-Daily):
Swing Length: 15-20
Min Confluence: 50-60%
HTF: Weekly
📚 Indicator Components:
✦ Market Structure Detection
✦ Order Block Identification
✦ Fair Value Gaps (FVG)
✦ Volume Analysis
✦ RSI (14)
✦ MACD (12, 26, 9)
✦ ATR (14)
✦ Multi-Timeframe Trend
✦ Confluence Scoring Algorithm
🚀 Performance Notes:
Optimized for speed and efficiency
Minimal CPU usage
Clean chart presentation
Limited drawing objects (no chart clutter)
Works on all TradingView plans
⚠️ Important Notes:
This indicator is a tool to assist trading decisions, not financial advice
Always use proper risk management (1-2% per trade recommended)
Backtest on your preferred market and timeframe
Combine with your own analysis and strategy
Past performance does not guarantee future results
🔔 Alert Setup:
Right-click indicator name → "Add Alert" → Choose:
"Bullish Confluence Signal" for buy setups
"Bearish Confluence Signal" for sell setups
"Volume Spike Alert" for unusual activity
💬 Support:
For questions, suggestions, or custom modifications, feel free to message me directly through TradingView.
Filter Trend
1. Indicator Name
Premium EMA Ribbon Filter (Pro Version)
(Advanced Trend & Momentum Filtering System Based on EMA Ribbons)
2. One-Line Introduction
A professional trend-analysis indicator that blends an advanced noise-filtering algorithm with an EMA ribbon system to extract only the pure bullish/bearish trend while smoothing out market noise.
3. Overall Description (7+ lines)
The Premium EMA Ribbon Filter is more than just a set of EMAs.
It analyzes the structure of a fast, medium, and slow EMA ribbon—along with the spacing and alignment between them—to determine whether the market is in a bullish trend, bearish trend, or a neutral/noise-heavy zone.
The core of this indicator is its noise-reduction algorithm and trend-strength calculation system.
Instead of relying on simple EMA cross signals, it evaluates how consistently the ribbon maintains bullish/bearish alignment over a specified period and highlights only strong trends with color coding, while weak or noisy areas are displayed in gray.
This helps traders avoid confusing or false signals and clearly focus only on the “meaningful zones.”
A Triple-Smoothing System is applied to create smoother, more refined ribbon movements, forming a stable “premium trend curve” that is less affected by short-term volatility.
As a result, this indicator works effectively for scalping, swing trading, and long-term trend following—staying true to the principle of removing noise and highlighting only the core market flow.
4. Short Advantages (6 items)
① Complete Noise Filtering
Using EMA ribbon comparison + tolerance logic, false reversals are largely eliminated, leaving only stable trend phases.
② Highly Readable Color System
Bullish trends are mint, bearish trends are red, and neutral/noise zones are gray—instantly visualizing market conditions.
③ Trend Strength Visualization
Not only trend direction but also trend strength is displayed via dynamic color transparency.
④ Smooth, Premium-Style Ribbon Design
Triple-smoothing creates a refined, luxury-level smoothness in movement.
⑤ Works Across All Timeframes
From 1-minute scalping to daily/weekly macro trend analysis.
⑥ Excellent Real-Trading Compatibility
Works extremely well when combined with ATR, SuperTrend, and volume-based indicators.
Indicator Manual (Required Section)
📌 Understanding the Core Concept
The indicator uses three EMAs (e.g., 20/50/100) arranged as a ribbon to analyze the structural alignment of the trend.
When the EMAs are cleanly aligned Top → Middle → Bottom, the market is in a bullish trend.
When aligned Bottom → Middle → Top, the market is in a bearish trend.
The indicator further evaluates the ribbon spread (gap) and the consistency of alignment to compute trend strength.
Noisy market conditions are shaded gray to clearly indicate “uncertain/indecisive” zones.
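A minimal sketch of the stacking test with the example 20/50/100 lengths:

//@version=5
indicator("Ribbon alignment (sketch)", overlay = true)
fast = ta.ema(close, 20)
mid = ta.ema(close, 50)
slow = ta.ema(close, 100)
bullish = fast > mid and mid > slow  // stacked top -> bottom
bearish = fast < mid and mid < slow  // stacked bottom -> top
ribbonColor = bullish ? color.teal : bearish ? color.red : color.gray
plot(fast, "Fast", ribbonColor)
plot(mid, "Mid", ribbonColor)
plot(slow, "Slow", ribbonColor)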
⚙️ Settings Description
Fast EMA: Most sensitive EMA; detects early trend signals
Mid EMA: Stabilizes the primary trend direction
Slow EMA: Defines the broader, long-term trend flow
Trend Lookback: The period used to analyze trend strength
Noise Tolerance (%): Higher values = stronger noise removal
Smoothing Steps: Controls how smooth the ribbon becomes
📈 Example Recognition
A bullish continuation/entry scenario forms when:
EMAs align in the order Fast → Mid → Slow (top side)
Ribbon color shifts into mint (strong bullish trend)
The ribbon begins to expand while price stays above the ribbon
📉 Example Recognition
A bearish continuation/entry occurs when:
EMAs align Fast → Mid → Slow (bottom side)
Ribbon color remains red
After contracting, the ribbon expands again during renewed downside strength
🧪 Recommended Usage
Combine with volume-based indicators (OBV, Volume Profile) → enhanced strong-trend detection
Use with SuperTrend or ATR Stop → clearer stop-loss placement
Combine with RSI/Stoch → avoid counter-trend entries in overheated conditions
Higher leverage traders should use higher tolerance settings
🔒 Cautions
EMA ribbons are trend-following tools; signals may weaken in ranging/sideways markets.
Never rely solely on this indicator—always confirm with volume, price patterns, or structure.
Very low Lookback values may cause excessive re-entry signals.
In high-volatility environments, ribbon spacing can contract/expand rapidly—use with caution.
Filter Wave
1. Indicator Name
Filter Wave
2. One-line Introduction
A visually enhanced trend strength indicator that uses linear regression scoring to render smoothed, color-shifting waves synced to price action.
3. General Overview
Filter Wave+ is a trend analysis tool designed to provide an intuitive and visually dynamic representation of market momentum.
It uses a pairwise comparison algorithm on linear regression values over a lookback period to determine whether price action is consistently moving upward or downward.
The result is a trend score, which is normalized and translated into a color-coded wave that floats above or below the current price. The wave's opacity increases with trend strength, giving a visual cue for confidence in the trend.
The wave itself is not a raw line—it goes through a three-stage smoothing process, producing a natural, flowing curve that is aesthetically aligned with price movement.
This makes it ideal for traders who need a quick visual context before acting on signals from other tools.
While Filter Wave+ does not generate buy/sell signals directly, its secure and efficient design allows it to serve as a high-confidence trend filter in any trading system.
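One plausible reading of the pairwise-comparison idea, as a hedged sketch (lengths are placeholders): across the lookback, score +1 whenever a newer regression value exceeds an older one, -1 when it is lower, then normalize by the number of pairs.

//@version=5
indicator("Pairwise trend score (sketch)")
look = 20
reg = ta.linreg(close, 20, 0)
float s = 0.0
for i = 0 to look - 2
    for j = i + 1 to look - 1
        s += reg[i] > reg[j] ? 1 : reg[i] < reg[j] ? -1 : 0
score = s / (look * (look - 1) / 2.0) // normalized to -1..+1
plot(score, "Trend score", color.blue)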
4. Key Advantages
🌊 Smooth, Dynamic Wave Output
3-stage smoothed curves give clean, flowing visual feedback on market conditions.
🎨 Trend Strength Visualized by Color Intensity
Stronger trends appear with more solid coloring, while weak/neutral trends fade visually.
🔍 Quantitative Trend Detection
Linear regression ordering delivers precise, math-based trend scoring for confidence assessment.
📊 Price-Synced Floating Wave
Wave is dynamically positioned based on ATR and price to align naturally with market structure.
🧩 Compatible with Any Strategy
No conflicting signals—Filter Wave+ serves as a directional overlay that enhances clarity.
🔒 Secure Core Logic
Core algorithm is lightweight and secure, with minimal code exposure and strong encapsulation.
📘 Indicator User Guide
📌 Basic Concept
Filter Wave+ calculates trend direction and intensity using linear regression alignment over time.
The resulting wave is rendered as a smoothed curve, colored based on trend direction (green for up, red for down, gray for neutral), and adjusted in transparency to reflect trend strength.
This allows for fast trend interpretation without overwhelming the chart with signals.
⚙️ Settings Explained
Lookback Period: Number of bars used for pairwise regression comparisons (higher = smoother detection)
Range Tolerance (%): Threshold to qualify as an up/down trend (lower = more sensitive)
Regression Source: The price input used in regression calculation (default: close)
Linear Regression Length: The period used for the core regression line
Bull/Bear Color: Customize the color for bullish and bearish waves
📈 Timing Example
Wave color changes to green and becomes more visible (less transparent)
Wave floats above price and aligns with an uptrend
Use as trend confirmation when other signals are present
📉 Timing Example
Wave shifts to red and darkens, floating below the price
Regression direction down; price continues beneath the wave
Acts as bearish confirmation for short trades or risk-off positioning
🧪 Recommended Use Cases
Use as a trend confidence overlay on your existing strategies
Especially useful in swing trading for detecting and confirming dominant market direction
Combine with RSI, MACD, or price action for high-accuracy setups
🔒 Precautions
This is not a signal generator—intended as a trend filter or directional guide
May respond slightly slower in volatile reversals; pair with responsive indicators
Wave position is influenced by ATR and price but does not represent exact entry/exit levels
Parameter optimization is recommended based on asset class and timeframe
Filter Bar
1. Indicator Name
Filter Bar
2. One-line Introduction
A trend-aware bar coloring system that visualizes market direction and strength through adaptive transparency based on regression scoring.
3. General Overview
Filter Bar+ is a minimalist but powerful trend visualization tool that colors chart bars according to market direction and momentum strength.
It analyzes the linear regression trend alignment over a specified lookback period and uses a pairwise comparison algorithm to determine whether the market is in a bullish, bearish, or neutral state.
The result is a "trend score" that gets normalized to reflect trend intensity (0~1).
Bar colors are then dynamically updated using the specified bullish or bearish base colors, where higher intensity results in more opaque (darker) bars, and weaker trends lead to lighter, faded tones.
If no strong trend is detected, bars are shown in gray, signaling indecision or neutrality.
The strength of this indicator lies in its simplicity—it doesn’t draw lines, waves, or shapes, but overlays insight directly onto the chart through smart color cues.
It’s particularly effective as a background filter for price action traders, scalpers, and anyone who prefers clean charts but still wants embedded directional context.
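A hedged sketch of the coloring mechanic (the slope-based strength proxy stands in for the script's own scoring):

//@version=5
indicator("Strength-tinted bars (sketch)", overlay = true)
len = 20
reg = ta.linreg(close, len, 0)
slope = reg - reg[1]
strength = math.min(math.abs(slope) / ta.atr(len), 1.0) // 0..1 proxy
transp = 90 - int(strength * 70)                        // strong -> opaque
col = strength < 0.15 ? color.gray : slope > 0 ? color.green : color.red
barcolor(color.new(col, transp))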
4. Key Advantages
🎨 Adaptive Bar Coloring
Bar color opacity increases with trend strength, offering instant visual confirmation without clutter.
📊 Quantified Trend Direction
Uses a regression-based scoring system to reliably detect uptrends, downtrends, or sideways markets.
⚖️ Customizable Sensitivity
Parameters like lookback period and tolerance percentage give users full control over signal responsiveness.
🧼 Clean Chart Presentation
No lines, shapes, or overlays—just color-coded bars that blend into your existing chart setup.
🚀 Lightweight & Fast
Minimal computational load ensures it works smoothly even on lower-end devices or multiple chart setups.
🔒 Secure Internal Logic
Algorithm is neatly encapsulated and optimized, with no critical logic exposed.
📘 Indicator User Guide
📌 Basic Concept
Filter Bar+ evaluates trend direction and strength using a pairwise comparison of linear regression values.
The result determines whether the market is bullish, bearish, or neutral, and adjusts bar colors accordingly.
It visually amplifies the current market state without drawing any indicators on the chart.
⚙️ Settings Explained
Lookback Period: Number of bars used to compare regression values
Range Tolerance (%): Minimum score required to label a trend as bullish or bearish
Regression Source: Data input used for regression (default: close)
Linear Regression Length: Period for generating the base regression line
Bull/Bear Base Colors: Choose colors to represent bullish or bearish bars
📈 Buy Timing Example
Bars are green (or user-set bullish color) and becoming more vivid
Indicates a strengthening bullish trend; helpful when used alongside breakout confirmation or support zones
📉 Sell Timing Example
Bars turn red (or your custom bearish color) with increasing opacity
Signals growing bearish pressure; acts as confirmation during short setups or breakdowns
🧪 Recommended Use Cases
Combine with volume, RSI, or price action setups for direction filtering
Ideal for clean chart strategies where visual simplicity is preferred
Use as a confirmation layer to reduce noise in sideways markets
🔒 Precautions
This is a visual filter, not a signal generator—use alongside other strategies for entries/exits
In choppy markets, bars may flicker between colors—adjust sensitivity as needed
Works best when you already have a directional thesis and want to validate it visually
Always test settings for your asset/timeframe before applying in live trades
Static K-means Clustering | InvestorUnknown
Static K-Means Clustering is a machine-learning-driven market regime classifier designed for traders who want a data-driven structure instead of subjective indicators or manually drawn zones.
This script performs offline (static) K-means training on your chosen historical window. Using four engineered features:
RSI (Momentum)
CCI (Price deviation / Mean reversion)
CMF (Money flow / Strength)
MACD Histogram (Trend acceleration)
It groups past market conditions into K distinct clusters (regimes). After training, every new bar is assigned to the nearest cluster via Euclidean distance in 4-dimensional standardized feature space.
This allows you to create models like:
Regime-based long/short filters
Volatility phase detectors
Trend vs. chop separation
Mean-reversion vs. breakout classification
Volume-enhanced money-flow regime shifts
Full machine-learning trading systems based solely on regimes
Note:
This script is not a universal ML strategy out of the box.
The user must engineer the feature set to match their trading style and target market.
K-means is a tool, not a ready made system, this script provides the framework.
Core Idea
K-means clustering takes raw, unlabeled market observations and attempts to discover structure by grouping similar bars together.
// STEP 1 — DATA POINTS ON A COORDINATE PLANE
// We start with raw, unlabeled data scattered in 2D space (x/y).
// At this point, nothing is grouped—these are just observations.
// K-means will try to discover structure by grouping nearby points.
//
// y ↑
// |
// 12 | •
// | •
// 10 | •
// | •
// 8 | • •
// |
// 6 | •
// |
// 4 | •
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
//
//
// STEP 2 — RANDOMLY PLACE INITIAL CENTROIDS
// The algorithm begins by placing K centroids at random positions.
// These centroids act as the temporary “representatives” of clusters.
// Their starting positions heavily influence the first assignment step.
//
// y ↑
// |
// 12 | •
// | •
// 10 | • C2 ×
// | •
// 8 | • •
// |
// 6 | C1 × •
// |
// 4 | •
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
//
//
// STEP 3 — ASSIGN POINTS TO NEAREST CENTROID
// Each point is compared to all centroids.
// Using simple Euclidean distance, each point joins the cluster
// of the centroid it is closest to.
// This creates a temporary grouping of the data.
//
// (Coloring concept shown using labels)
//
// - Points closer to C1 → Cluster 1
// - Points closer to C2 → Cluster 2
//
// y ↑
// |
// 12 | 2
// | 1
// 10 | 1 C2 ×
// | 2
// 8 | 1 2
// |
// 6 | C1 × 2
// |
// 4 | 1
// |
// 2 |______________________________________________→ x
// 2 4 6 8 10 12 14
//
// (1 = assigned to Cluster 1, 2 = assigned to Cluster 2)
// At this stage, clusters are formed purely by distance.
Your chosen historical window becomes the static training dataset, and after fitting, the centroids never change again.
This makes the model:
Predictable
Repeatable
Consistent across backtests
Fast for live use (no recalculation of centroids every bar)
Static Training Window
You select a period with:
Training Start
Training End
Only bars inside this range are used to fit the K-means model. This window defines:
the market regime examples
the statistical distributions (means/std) for each feature
how the centroids will be positioned post-training
Bars before training = fully transparent
Training bars = gray
Post-training bars = full colored regimes
Feature Engineering (4D Input Vector)
Every bar during training becomes a 4-dimensional point:
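In Pine terms, a sketch of that point (indicator lengths and the placeholder means/stds are illustrative; the script derives the real statistics from the training window):

//@version=5
indicator("4D feature vector (sketch)")
standardize(v, mu, sd) => (v - mu) / sd
rsiV = ta.rsi(close, 14)  // momentum
cciV = ta.cci(hlc3, 20)   // price deviation / mean reversion
mfm = high == low ? 0.0 : ((close - low) - (high - close)) / (high - low)
cmfV = math.sum(mfm * volume, 20) / math.sum(volume, 20)  // money flow
[macdLine, signalLine, histV] = ta.macd(close, 12, 26, 9) // trend acceleration
z1 = standardize(rsiV, 50.0, 15.0) // placeholder training stats
z2 = standardize(cciV, 0.0, 100.0)
z3 = standardize(cmfV, 0.0, 0.1)
z4 = standardize(histV, 0.0, ta.stdev(histV, 200))
plot(z1, "z(RSI)")
plot(z2, "z(CCI)")
plot(z3, "z(CMF)")
plot(z4, "z(MACD hist)")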
This combination balances momentum, volatility, mean reversion, and trend acceleration, giving the algorithm a richer "market fingerprint" per bar.
Standardization
To prevent any feature from dominating due to scale differences (e.g., CMF near zero vs CCI ±200), all features are standardized:
standardize(value, mean, std) =>
(value - mean) / std
Centroid Initialization
Centroids start at diverse coordinates using various curves:
linear
sinusoidal
sign-preserving quadratic
tanh compression
init_centroids() =>
// Spread centroids across using different shapes per feature
for c = 0 to k_clusters - 1
frac = k_clusters == 1 ? 0.0 : c / (k_clusters - 1.0) // 0 → 1
v = frac * 2 - 1 // -1 → +1
array.set(cent_rsi, c, v) // linear
array.set(cent_cci, c, math.sin(v)) // sinusoidal
array.set(cent_cmf, c, v * v * (v < 0 ? -1 : 1)) // quadratic sign-preserving
array.set(cent_mac, c, tanh(v)) // compressed
This makes the initial cluster spread “random,” even though true randomness is hard to achieve in Pine Script.
K-Means Iterative Refinement
The algorithm repeats these steps:
(A) Assignment Step: each bar is assigned to the nearest centroid via Euclidean distance in 4D:
distance = sqrt(dx² + dy² + dz² + dw²)
(B) Update Step: centroids move to the mean of the points assigned to them. This repeats for the configured number of iterations.
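A compact sketch of this loop, reduced to a single feature for readability (the script does the same in 4D):

//@version=5
indicator("K-means loop (sketch)")
int k = 3
int iters = 10
int n = 200
var float[] pts = array.new_float()
if last_bar_index - bar_index < n // collect a static training window
    array.push(pts, close)
plot(array.size(pts), "Training bars collected")
if barstate.islast and array.size(pts) > k
    float[] cent = array.new_float(k)
    for c = 0 to k - 1 // spread initial centroids across the observed range
        array.set(cent, c, array.min(pts) + (array.max(pts) - array.min(pts)) * c / (k - 1))
    int[] assign = array.new_int(array.size(pts), 0)
    for it = 1 to iters
        // (A) assignment: nearest centroid by distance
        for p = 0 to array.size(pts) - 1
            best = 0
            bestD = math.abs(array.get(pts, p) - array.get(cent, 0))
            for c = 1 to k - 1
                d = math.abs(array.get(pts, p) - array.get(cent, c))
                if d < bestD
                    bestD := d
                    best := c
            array.set(assign, p, best)
        // (B) update: each centroid moves to the mean of its members
        for c = 0 to k - 1
            float sum = 0.0
            int cnt = 0
            for p = 0 to array.size(pts) - 1
                if array.get(assign, p) == c
                    sum += array.get(pts, p)
                    cnt += 1
            if cnt > 0
                array.set(cent, c, sum / cnt)
    // `cent` now holds the trained centroids; live bars would be
    // standardized and assigned to the nearest one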
LIVE REGIME CLASSIFICATION
After training, each new bar is:
Standardized using the training mean/std
Compared to all centroids
Assigned to the nearest cluster
Bar color updates based on cluster
No re-training occurs. This ensures:
No lookahead bias
Clean historical testing
Stable regimes over time
CLUSTER BEHAVIOR & TRADING LOGIC
Clusters (0, 1, 2, 3…) hold no inherent meaning. The user defines what each cluster does.
Example of custom actions:
Cluster 0 → Cash
Cluster 1 → Long
Cluster 2 → Short
Cluster 3+ → Cash (noise regime)
This flexibility means:
One trader might have cluster 0 as consolidation.
Another might repurpose it as a breakout-loading zone.
A third might ignore 3 clusters entirely.
Example on ETHUSD
Important Note:
Any change of parameters, chart timeframe, or ticker can cause the “order” of clusters to change.
The script does NOT assume any cluster equals any actionable bias, user decides.
PERFORMANCE METRICS & ROC TABLE
The indicator computes average 1-bar ROC for each cluster in:
Training set
Test (live) set
This helps measure:
Cluster profitability consistency
Regime forward predictability
Whether a regime is noise, trend, or reversion-biased
EQUITY SIMULATION & FEES
Designed for close-to-close realistic backtesting.
Position = cluster of previous bar
Fees applied only on regime switches. Meaning:
Staying long → no fee
Switching long→short → fee applied
Switching any→cash → fee applied
The fee input is a percentage; the script converts it internally.
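A close-to-close sketch of this accounting (the SMA-based regime is a stand-in; in the script the position comes from the previous bar's cluster mapping):

//@version=5
indicator("Regime equity with switch fees (sketch)")
feePct = input.float(0.1, "Fee %") / 100.0
pos = close > ta.sma(close, 50) ? 1 : -1 // stand-in regime signal
roc = nz(close / close[1] - 1)
var float equity = 1.0
equity *= 1 + nz(pos[1]) * roc // position held is last bar's regime
if pos != pos[1]
    equity *= 1 - feePct       // fee only when the position changes
plot(equity, "Equity")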
Disclaimers
⚠️ This indicator uses machine-learning but does not predict the future. It classifies similarity to past regimes, nothing more.
⚠️ Backtest results are not indicative of future performance.
⚠️ Clusters have no inherent “bullish” or “bearish” meaning. You must interpret them based on your testing and your own feature engineering.
LibVelo
Library "LibVelo"
This library provides a sophisticated framework for **Velocity
Profile (Flow Rate)** analysis. It measures the physical
speed of trading at specific price levels by relating volume
to the time spent at those levels.
## Core Concept: Market Velocity
Unlike Volume Profiles, which only answer "how much" traded,
Velocity Profiles answer "how fast" it traded.
It is calculated as:
`Velocity = Volume / Duration`
This metric (contracts per second) reveals hidden market
dynamics invisible to pure Volume or TPO profiles:
1. **High Velocity (Fast Flow):**
* **Aggression:** Initiative buyers/sellers hitting market
orders rapidly.
* **Liquidity Vacuum:** Price slips through a level because
order book depth is thin (low resistance).
2. **Low Velocity (Slow Flow):**
* **Absorption:** High volume but very slow price movement.
Indicates massive passive limit orders ("Icebergs").
* **Apathy:** Little volume over a long time. Lack of
interest from major participants.
## Architecture: Triple-Engine Composition
To ensure maximum performance while offering full statistical
depth for all metrics, this library utilises **object
composition** with a lazy evaluation strategy:
#### Engine A: The Master (`vpVol`)
* **Role:** Standard Volume Profile.
* **Purpose:** Maintains the "ground truth" of volume distribution,
price buckets, and ranges.
#### Engine B: The Time Container (`vpTime`)
* **Role:** specialized container for time duration (in ms).
* **Hack:** It repurposes standard volume arrays (specifically
`aBuy`) to accumulate time duration for each bucket.
#### Engine C: The Calculator (`vpVelo`)
* **Role:** Temporary scratchpad for derived metrics.
* **Purpose:** When complex statistics (like Value Area or Skewness)
are requested for **Velocity**, this engine is assembled
on-demand to leverage the full statistical power of `LibVPrf`
without rewriting complex algorithms.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
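Before the reference, a hypothetical usage sketch (the import version and the Metric enum member name are assumptions; check the published release):

//@version=5
indicator("LibVelo usage (sketch)", overlay = true)
import AustrianTradingMachine/LibVelo/1 as lv
var profile = lv.create(buckets = 24, rangeUp = close * 1.02, rangeLo = close * 0.98, dynamic = true, valueArea = 70)
profile.addBar() // ingest the current bar's volume and time
[pocIdx, pocPrice] = profile.getPoc(lv.Metric.Velocity)
plot(pocPrice, "Velocity POC", color.orange, 2)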
create(buckets, rangeUp, rangeLo, dynamic, valueArea, allot, estimator, cdfSteps, split, trendLen)
Construct a new `Velo` controller, initializing its engines.
Parameters:
buckets (int) : series int Number of price buckets ≥ 1.
rangeUp (float) : series float Upper price bound (absolute).
rangeLo (float) : series float Lower price bound (absolute).
dynamic (bool) : series bool Flag for dynamic adaption of profile ranges.
valueArea (int) : series int Percentage for Value Area (1..100).
allot (series AllotMode) : series AllotMode Allocation mode `Classic` or `PDF` (default `PDF`).
estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : series PriceEst PDF model for distribution attribution (default `Uniform`).
cdfSteps (int) : series int Resolution for PDF integration (default 20).
split (series SplitMode) : series SplitMode Buy/Sell split for the master volume engine (default `Classic`).
trendLen (int) : series int Look‑back for trend factor in dynamic split (default 3).
Returns: Velo Freshly initialised velocity profile.
method clone(self)
Create a deep copy of the composite profile.
Namespace types: Velo
Parameters:
self (Velo) : Velo Profile object to copy.
Returns: Velo A completely independent clone.
method clear(self)
Reset all engines and accumulators.
Namespace types: Velo
Parameters:
self (Velo) : Velo Profile object to clear.
Returns: Velo Cleared profile (chaining).
method merge(self, srcVolBuy, srcVolSell, srcTime, srcRangeUp, srcRangeLo, srcVolCvd, srcVolCvdHi, srcVolCvdLo)
Merges external data (Volume and Time) into the current profile.
Automatically handles resizing and re-bucketing if ranges differ.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
srcVolBuy (array) : array Source Buy Volume bucket array.
srcVolSell (array) : array Source Sell Volume bucket array.
srcTime (array) : array Source Time bucket array (ms).
srcRangeUp (float) : series float Upper price bound of the source data.
srcRangeLo (float) : series float Lower price bound of the source data.
srcVolCvd (float) : series float Source Volume CVD final value.
srcVolCvdHi (float) : series float Source Volume CVD High watermark.
srcVolCvdLo (float) : series float Source Volume CVD Low watermark.
Returns: Velo `self` (chaining).
method addBar(self, offset)
Main data ingestion. Distributes Volume and Time to buckets.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
offset (int) : series int Offset of the bar to add (default 0).
Returns: Velo `self` (chaining).
method setBuckets(self, buckets)
Sets the number of buckets for the profile.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
buckets (int) : series int New number of buckets.
Returns: Velo `self` (chaining).
method setRanges(self, rangeUp, rangeLo)
Sets the price range for the profile.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
rangeUp (float) : series float New upper price bound.
rangeLo (float) : series float New lower price bound.
Returns: Velo `self` (chaining).
method setValueArea(self, va)
Set the percentage of volume/time for the Value Area.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
va (int) : series int New Value Area percentage (0..100).
Returns: Velo `self` (chaining).
method getBuckets(self)
Returns the current number of buckets in the profile.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: series int The number of buckets.
method getRanges(self)
Returns the current price range of the profile.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns:
rangeUp series float The upper price bound of the profile.
rangeLo series float The lower price bound of the profile.
method getArrayBuyVol(self)
Returns the internal raw data array for **Buy Volume** directly.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: array The internal array for buy volume.
method getArraySellVol(self)
Returns the internal raw data array for **Sell Volume** directly.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: array The internal array for sell volume.
method getArrayTime(self)
Returns the internal raw data array for **Time** (in ms) directly.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: array The internal array for time duration.
method getArrayBuyVelo(self)
Returns the internal raw data array for **Buy Velocity** directly.
Automatically executes _assemble() if data is dirty.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: array The internal array for buy velocity.
method getArraySellVelo(self)
Returns the internal raw data array for **Sell Velocity** directly.
Automatically executes _assemble() if data is dirty.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
Returns: array The internal array for sell velocity.
method getBucketBuyVol(self, idx)
Returns the **Buy Volume** of a specific bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns: series float The buy volume.
method getBucketSellVol(self, idx)
Returns the **Sell Volume** of a specific bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns: series float The sell volume.
method getBucketTime(self, idx)
Returns the raw accumulated time (in ms) spent in a specific bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns: series float The time in milliseconds.
method getBucketBuyVelo(self, idx)
Returns the **Buy Velocity** (Aggressive Buy Flow) of a bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns: series float The buy velocity, in volume units per second.
method getBucketSellVelo(self, idx)
Returns the **Sell Velocity** (Aggressive Sell Flow) of a bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns: series float The sell velocity, in volume units per second.
method getBktBnds(self, idx)
Returns the price boundaries of a specific bucket.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
idx (int) : series int The index of the bucket.
Returns:
up series float The upper price bound of the bucket.
lo series float The lower price bound of the bucket.
method getPoc(self, target)
Returns Point of Control (POC) information for the specified target metric.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns:
pocIdx series int The index of the POC bucket.
pocPrice series float The mid-price of the POC bucket.
method getVA(self, target)
Returns Value Area (VA) information for the specified target metric.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns:
vaUpIdx series int The index of the upper VA bucket.
vaUpPrice series float The upper price bound of the VA.
vaLoIdx series int The index of the lower VA bucket.
vaLoPrice series float The lower price bound of the VA.
method getMedian(self, target)
Returns the Median price for the specified target metric distribution.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns:
medianIdx series int The index of the bucket containing the median.
medianPrice series float The median price.
method getAverage(self, target)
Returns the weighted average price (VWAP/TWAP) for the specified target.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns:
avgIdx series int The index of the bucket containing the average.
avgPrice series float The weighted average price.
method getStdDev(self, target)
Returns the standard deviation for the specified target distribution.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns: series float The standard deviation.
method getSkewness(self, target)
Returns the skewness for the specified target distribution.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns: series float The skewness.
method getKurtosis(self, target)
Returns the excess kurtosis for the specified target distribution.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns: series float The excess kurtosis.
method getSegments(self, target)
Returns the fundamental unimodal segments for the specified target metric.
Calculates on-demand if the target is 'Velocity' and data changed.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns: matrix A 2-column matrix where each row is a (start, end) index pair.
method getCvd(self, target)
Returns Cumulative Volume/Velo Delta (CVD) information for the target metric.
Namespace types: Velo
Parameters:
self (Velo) : Velo The profile object.
target (series Metric) : Metric The data aspect to analyse (Volume, Time, Velocity).
Returns:
cvd series float The final delta value.
cvdHi series float The historical high-water mark of the delta.
cvdLo series float The historical low-water mark of the delta.
Velo
Velo Composite Velocity Profile Controller.
Fields:
_vpVol (VPrf type from AustrianTradingMachine/LibVPrf/2) : LibVPrf.VPrf Engine A: Master Volume source.
_vpTime (VPrf type from AustrianTradingMachine/LibVPrf/2) : LibVPrf.VPrf Engine B: Time duration container (ms).
_vpVelo (VPrf type from AustrianTradingMachine/LibVPrf/2) : LibVPrf.VPrf Engine C: Scratchpad for velocity stats.
_aTime (array) : array Pointer alias to `vpTime.aBuy` (Time storage).
_valueArea (series float) : int Percentage of total volume to include in the Value Area (1..100)
_estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : LibBrSt.PriceEst PDF model for distribution attribution.
_allot (series AllotMode) : AllotMode Attribution model (Classic or PDF).
_cdfSteps (series int) : int Integration resolution for PDF.
_isDirty (series bool) : bool Lazy evaluation flag for vpVelo.