Forex Pips Tracker Pinescriptlabs
This algorithm is exclusively designed for the Forex market 🌐 and serves as a tool to measure volatility, helping to determine on average how many pips positions move per hour. With this information, a trader can place take profit and stop loss orders with greater certainty, since they know the average pip movement range during each hour of the day.
What does it do and how does it work?
• Volatility measurement in pips 📊:
The algorithm calculates the size of each candle's movement (or range) expressed in pips: it takes the difference between the highest and lowest price of each candle and converts it into pips (a minimal sketch of this conversion appears at the end of this section).
• Time zone adjustment ⏰:
It allows you to configure the time zone so that the data aligns with your desired schedule. This is especially useful for comparing movements at different times based on the trader's location.
• Analysis by time intervals 🕒:
The algorithm’s logic organizes the information for each hour of the day. It stores data for the current day, the previous day, weekly, and historically (200 candles). This allows you to see how volatility varies across different periods, providing a dynamic view of market behavior.
• Directionality of movement 🔄:
In addition to averaging the pip range, the algorithm determines the predominant direction of each candle (bullish or bearish). This translates into visual indicators (like arrows) that help identify whether, on average, the movement during that hour tends to go up or down.
• Table visualization 📈:
Finally, the information is presented in an integrated table on the chart. Each row corresponds to an hour of the day and shows the average number of pips and the direction (bullish, bearish, or neutral) for each analyzed period. This table makes it easy to quickly and practically interpret the volatility data.
By combining these features, the algorithm becomes an essential tool for traders looking to better understand market dynamics and optimize their trading strategies! 💼✨
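As referenced above, here is a minimal Pine Script sketch of the candle-range-to-pips conversion. The 10-tick pip convention is an assumption (it holds for most 5-decimal Forex quotes); the published indicator's exact conversion may differ.
//@version=5
indicator("Candle range in pips (sketch)")
// Assumption: one pip equals 10 minticks (a "pipette"-quoting broker convention)
pipSize = syminfo.mintick * 10
// size of each candle's movement, expressed in pips
rangePips = (high - low) / pipSize
plot(rangePips, title = "Range (pips)")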
Dual Zigzag [Trendoscope®]
🎲 The Dual Zigzag indicator is built on a recursive zigzag algorithm. It is very similar to other zigzag indicators published by us and other authors. However, the key point here is that the indicator draws zigzags on both price and any other plot-based indicator, on separate layouts.
Before we get into the indicator, here are brief descriptions of the underlying concepts and key terminology.
🎯 Zigzag
The Zigzag indicator breaks down price, or any input series, into a series of Pivot Highs and Pivot Lows alternating between each other. Although zigzags show pivot highs and lows, they should not be used for buying at lows and selling at highs. The main application of the zigzag indicator is the visualisation of market structure, and it can be used as a basic building block for pattern recognition algorithms.
🎯 Recursive Zigzag Algorithm
The recursive zigzag algorithm builds zigzags on multiple levels, where each level is based on the previous level's pivots. The level-0 zigzag is built on price. For level 1, however, the level-0 zigzag pivots are used instead of price. Similarly, for level 2, the level-1 zigzag pivots are used as the base.
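As an illustration, here is a minimal Pine Script sketch of how level-0 pivots can be detected (the 5-bar confirmation width is an assumption; the actual recursive implementation then reapplies the same alternation logic to these pivots instead of to price):
//@version=5
indicator("Level-0 zigzag pivots (sketch)", overlay = true)
len = input.int(5, "Pivot width")
// a pivot is confirmed only `len` bars after it forms, hence the negative offset
ph = ta.pivothigh(high, len, len)
pl = ta.pivotlow(low, len, len)
plotshape(not na(ph), style = shape.triangledown, location = location.abovebar, offset = -len, color = color.red)
plotshape(not na(pl), style = shape.triangleup, location = location.belowbar, offset = -len, color = color.green)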
🎲 Components of the Dual Zigzag Indicator
Here are the components of the Dual Zigzag indicator:
Built-in Oscillator - The indicator has built-in oscillator options for plotting RSI (Relative Strength Index), MFI (Money Flow Index), CCI (Commodity Channel Index), CMO (Chande Momentum Oscillator), COG (Center of Gravity), and ROC (Rate of Change). Apart from the given built-in oscillators, users can also use a custom external output as the base. The oscillators are not printed on the price pane, but on a separate indicator overlay.
Zigzag On Oscillator - A recursive zigzag is calculated and printed on the oscillator series. Each pivot high and pivot low also prints a label with the retracement ratios and price levels at those points. The zigzag on the oscillator is also printed on the indicator overlay pane.
Zigzag on Price - A recursive zigzag is calculated based on price and printed on the price pane. This is made possible by using the force_overlay option present in the drawing objects. At each zigzag pivot level, a label with the price retracement ratios and oscillator values is printed.
It is called a dual zigzag because the indicator calculates the zigzag on both the price and oscillator series and prints them separately on different panes of the chart.
🎲 Indicator Settings
Settings include
Theme display settings to get the right colour combination to match the background.
Zigzag settings to be used for zigzag calculation and display
Oscillator settings to choose the oscillator to be used as the base for the 2nd zigzag
🎲 Applications
Useful in spotting divergences with both indicator and price having their own zigzag to highlight pivots
Spotting patterns in indicators/oscillators and correlate them with the patterns on price
🎲 Using External Input
If users want to use an external indicator such as OBV instead of the built-in oscillators, they can do so by using the custom option.
Here is how this can be done.
Step 1. Add both Dual Zigzag and the intended indicator (in this case OBV) to the chart. Notice that both OBV and Dual Zigzag appear on different panes.
Step 2. Edit the indicator settings of Dual Zigzag and set the custom indicator by selecting "custom" as the oscillator name and then setting the custom external indicator name and input.
Step 3. You will notice that the zigzag in the Dual Zigzag indicator pane is already showing the zigzag pivots based on the OBV indicator, and the price pivots display OBV values at the pivot points. We can leave this as is.
Step 4. As an additional step, you can also merge the OBV pane and the Dual Zigzag indicator pane into one by going into the OBV settings and moving the indicator to the pane above. Merge the scales so that there are not two scales on the same pane and the entire scale appears on the right.
At the end, you should see two panes - one with price and the other with OBV, both having their zigzags plotted.
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
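For orientation, here is a minimal Pine Script sketch of the first step only: building the Hankel (trajectory) matrix from price. Pine has no built-in SVD, so the decomposition and reconstruction steps require a hand-rolled implementation, as in the published script; the window sizes here are assumptions.
//@version=5
indicator("SSA trajectory matrix (sketch)")
lag = input.int(30, "Lag (window length)")
nBars = input.int(120, "Bars to process")
var m = matrix.new<float>(1, 1, na)
if barstate.islast
    m := matrix.new<float>(lag, nBars - lag + 1, na)
    for i = 0 to lag - 1
        for j = 0 to nBars - lag
            // column j holds the lag-length window starting j bars into the sample
            m.set(i, j, close[nBars - 1 - (i + j)])
plot(close)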
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
LengthAdaptation
Collection of dynamic length adaptation algorithms. Mostly from various Adaptive Moving Averages (they are usually just EMA otherwise). Now you can combine Adaptations with any other Moving Averages or Oscillators (see my other libraries) to get something like Deviation Scaled RSI or Fractal Adaptive VWMA. This collection is not encyclopaedic. Suggestions are welcome.
chande(src, len, sdlen, smooth, power) Chande's Dynamic Length
Parameters:
src : Series to use
len : Reference lookback length
sdlen : Lookback length of Standard deviation
smooth : Smoothing length of Standard deviation
power : Exponent of the length adaptation (lower is smaller variation)
Returns: Calculated period
Taken from Chande's Dynamic Momentum Index (CDMI or DYMOI), which is dynamic RSI with this length
Original default power value is 1, but I use 0.5
A variant of this algorithm is also included, where volume is used instead of price
vidya(src, len, dynLow) Variable Index Dynamic Average Indicator (VIDYA)
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
Returns: Calculated period
Standard VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
I took the adaptation part, as it is just an EMA otherwise
vidyaRS(src, len, dynHigh) Relative Strength Dynamic Length - VIDYA RS
Parameters:
src : Series to use
len : Reference lookback length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on Vitali Apirine's modification (Stocks and Commodities, January 2022) of VIDYA algorithm. The period oscillates from the Upper Bound down (fast)
I took the adaptation part, as it is just an EMA otherwise
kaufman(src, len, dynLow, dynHigh) Kaufman Efficiency Scaling
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on the Efficiency Ratio calculation originally used in the Kaufman Adaptive Moving Average developed by Perry J. Kaufman
I took the adaptation part, as it is just an EMA otherwise
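A minimal sketch of what such Efficiency-Ratio scaling can look like, assuming the usual KAMA-style mapping (efficient/trending markets get the shorter period, choppy markets the longer one); the library's exact mapping may differ:
//@version=5
indicator("Kaufman efficiency scaling (sketch)")
kaufmanLen(src, len, dynLow, dynHigh) =>
    // Efficiency Ratio: net change over the window divided by the sum of absolute bar-to-bar changes
    er = math.abs(ta.change(src, len)) / math.max(math.sum(math.abs(ta.change(src)), len), 1e-10)
    dynHigh - er * (dynHigh - dynLow) // er near 1 -> dynLow, er near 0 -> dynHigh
plot(kaufmanLen(close, 10, 5, 50), title = "Dynamic length")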
ds(src, len) Deviation Scaling
Parameters:
src : Series to use
len : Reference lookback length
Returns: Calculated period
Based on the Deviation Scaled Super Smoother (DSSS) by John F. Ehlers
Originally used with Super Smoother
The RMS originally has a 50-bar lookback. Changed to 4x length for better flexibility. Could be wrong.
maa(src, len, threshold) Median Average Adaptation
Parameters:
src : Series to use
len : Reference lookback length
threshold : Adjustment threshold (lower is smaller length, default: 0.002, min: 0.0001)
Returns: Calculated period
Based on Median Average Adaptive Filter by John F. Ehlers
Discovered and implemented by @cheatcountry:
I took the adaptation part, as it is just an EMA otherwise
fra(len, fc, sc) Fractal Adaptation
Parameters:
len : Reference lookback length
fc : Fast constant (default: 1)
sc : Slow constant (default: 200)
Returns: Calculated period
Based on FRAMA by John F. Ehlers
Modified to allow lower and upper bounds by an unknown author
I took the adaptation part, as it is just an EMA otherwise
mama(src, dynLow, dynHigh) MESA Adaptation - MAMA Alpha
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on MESA Adaptive Moving Average by John F. Ehlers
Introduced in the September 2001 issue of Stocks and Commodities
Inspired by the @everget implementation:
I took the adaptation part, as it is just an EMA otherwise
doAdapt(type, src, len, dynLow, dynHigh, chandeSDLen, chandeSmooth, chandePower) Execute a particular Length Adaptation from the list
Parameters:
type : Length Adaptation type to use
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
chandeSDLen : Lookback length of Standard deviation for Chande's Dynamic Length
chandeSmooth : Smoothing length of Standard deviation for Chande's Dynamic Length
chandePower : Exponent of the length adaptation for Chande's Dynamic Length (lower is smaller variation)
Returns: Calculated period (float, not limited)
doMA(type, src, len) MA wrapper on wrapper: if DSSS is selected, calculate it here
Parameters:
type : MA type to use
src : Series to use
len : Filtering length
Returns: Filtered series
Demonstration of a combined indicator: Deviation Scaled Super Smoother
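A hypothetical usage sketch of the library's two wrappers (the import path/version and the type strings "Kaufman" and "EMA" are illustrative assumptions, not the library's confirmed identifiers):
//@version=5
indicator("Adaptive MA demo (sketch)", overlay = true)
import username/LengthAdaptation/1 as la // hypothetical import path
// Chande-specific arguments are still passed, but unused by the Kaufman adaptation
dynLen = la.doAdapt("Kaufman", close, 20, 5, 50, 20, 3, 0.5)
plot(la.doMA("EMA", close, dynLen), title = "Kaufman-adaptive EMA")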
DMI + HMA - No Risk Management
DMI (Directional Movement Index) and HMA (Hull Moving Average)
The DMI and HMA make a great combination: the DMI gauges the market direction, while the HMA adds confirmation of the trend's strength.
What is the DMI?
The DMI is an indicator that was developed by J. Welles Wilder in 1978. The indicator was designed to identify in which direction the price is moving. This is done by comparing previous highs and lows and drawing two lines.
1. A Positive movement line
2. A Negative movement line
A third line can be added, known as the ADX line or Average Directional Index. This can also be used to gauge the strength of the direction in which the market is moving.
When the Positive movement line (DI+) is above the Negative movement line (DI-), there is more upward pressure. And of course, vice versa: when the DI- is above the DI+, that would indicate more downward pressure.
Want to know more about HMA? Check out one of our other published scripts
What is this strategy doing?
We are first waiting for the DMI to cross in our favoured direction, after that, we wait for the HMA to signal the entry. Without both conditions being true, no trade will be made.
Long Entries
1. DI+ crosses above DI-
2. HMA line 1 is above HMA line 2
Short Entries
1. DI- Crosses above DI+
2. HMA line 1 is below HMA line 2
It's as simple as that; a minimal sketch of these conditions follows below.
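A sketch of the entry logic (lengths are assumptions; the published script's optimised inputs may differ, and it waits for the DMI cross before the HMA confirmation rather than requiring both on the same bar):
//@version=5
strategy("DMI + HMA entries (sketch)", overlay = true)
[diplus, diminus, adx] = ta.dmi(14, 14)
hma1 = ta.hma(close, 21)
hma2 = ta.hma(close, 50)
longCond = ta.crossover(diplus, diminus) and hma1 > hma2 // DI+ crosses above DI-, HMA confirms
shortCond = ta.crossover(diminus, diplus) and hma1 < hma2 // DI- crosses above DI+, HMA confirms
if longCond
    strategy.entry("Long", strategy.long)
if shortCond
    strategy.entry("Short", strategy.short)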
Conclusion
While this strategy does have its downsides, those can be reduced by adding some risk management to the script. In general, the trade profitability is above average, and the max drawdown is at a minimum.
The settings have been optimised to suit BTCUSDT PERP markets, though with small adjustments it can be used on many assets!
SMU Stock Thermometer
This script shows various technical indicators in a stacked vertical candle called the Market Thermometer.
It helps to see the price action in one single vertical column where the actual price moves up or down. So you can see the price change based on your custom setting levels.
I've been studying the ALGO for over a year and have made many live experimental trades, long and short. So, I'm trying to find a way to see what the ALGO's next move is. If that sounds far-fetched, then you should see my other published scripts.
Here is an example of how the ALGO dances around old indicators, which is why I started creating a bunch of new indicators that the ALGO doesn't know.
Example:
Impact-driven algorithm: large volume masked as small volume to keep the price at a desired level. So your chart says overbought, but the market doesn't drop for days.
Cost-driven algorithm: a hedge fund buys every time at a lower price and prevents others from buying low, then moves up fast. It is like a clock with millisecond timing, and the ALGO owners know when to buy low and when to sell high.
If you have a good idea, let me know so I can include it in future versions.
Enjoy and think outside the box; it's the only way to beat the ALGO.
ICT Opening Range Projections (tristanlee85)
ICT Opening Range Projections
This indicator visualizes key price levels based on ICT's (Inner Circle Trader) "Opening Range" concept. This 30-minute time interval establishes price levels that the algorithm will refer to throughout the session. The indicator displays these levels, including standard deviation projections, internal subdivisions (quadrants), and the opening price.
🟪 What It Does
The Opening Range is a crucial 30-minute window where market algorithms establish significant price levels. ICT theory suggests this range forms the basis for daily price movement.
This script helps you:
Mark the high, low, and opening price of each session.
Divide the range into quadrants (premium, discount, and midpoint/Consequent Encroachment).
Project potential price targets beyond the range using configurable standard deviation multiples.
🟪 How to Use It
This tool aids in time-based technical analysis rooted in ICT's Opening Range model, helping you observe price interaction with algorithmic levels.
Example uses include:
Identifying early structural boundaries.
Observing price behavior within premium/discount zones.
Visualizing initial displacement from the range to anticipate future moves.
Comparing price reactions at projected standard deviation levels.
Aligning price action with significant times like London or NY Open.
Note: This indicator provides a visual framework; it does not offer trade signals or interpretations.
🟪 Key Information
Time Zone: New York time (ET) is required on your chart.
Sessions: Supports multiple sessions, including NY midnight, NY AM, NY PM, and three custom timeframes.
Time Interval: Supports multi-timeframe up to 15 minutes. Best used on a 1-minute chart for accuracy.
🟪 Session Options
The Opening Range interval is configurable for up to 6 sessions:
Pre-defined ICT Sessions:
NY Midnight: 12:00 AM – 12:30 AM ET
NY AM: 9:30 AM – 10:00 AM ET
NY PM: 1:30 PM – 2:00 PM ET
Custom Sessions:
Three user-defined start/end time pairs.
This example shows a custom session from 03:30 - 04:00:
🟪 Understanding the Levels
The Opening Price is the open of the first 1-minute candle within the chosen session.
At session close, the Opening Range is calculated using its High and Low. An optional swing-based mode uses swing highs/lows for range boundaries.
The range is divided into quadrants by its midpoint ( Consequent Encroachment or CE):
Upper Quadrant: CE to high (premium).
Lower Quadrant: Low to CE (discount).
These subdivisions help visualize internal range dynamics, where price often reacts during algorithmic delivery.
🟪 Working with Ranges
By default, the range is determined by the highest high and lowest low of the 30-minute session:
A range can also be determined by the highest/lowest swing points:
Quadrants outline the premium and discount of a range that price will reference:
Small ranges still follow the same algorithmic logic, but may be deemed insignificant for one's trading. These can be filtered in the settings by specifying a minimum ticks limit. In this example, the range is 42 ticks (10.5 points) but the indicator is configured for 80 ticks (20 points). We can select which levels will plot if the range is below the limit. Here, only the 00:00 opening price is plotted:
You may opt to include the range high/low, quadrants, and projections as well. This will plot a red (configurable) range bracket to indicate it is below the limit while plotting the levels:
🟪 Price Projections
Projections extend beyond the Opening Range using standard deviations, framing the market beyond the initial session and identifying potential targets. You define the standard deviation multiples (e.g., 1.0, 1.5, 2.0).
Both positive and negative extensions are displayed, symmetrically projected from the range's high and low.
The Dynamic Levels option plots only the next projection level once price crosses the previous extreme. For example, only the 0.5 STDEV level plots until price reaches it, then the 1.0 level appears, and so on. This continues up to your defined maximum projections, or indefinitely if standard deviations are set to 0.
This example shows dynamic levels for a total of 6 sessions, only 1 of which meet a configured minimum limit of 50 ticks:
Small ranges followed by significant displacement are impacted the most with the number of levels plotted. You may hide projections when configuring the minimum ticks.
A fixed standard deviation will plot levels in both directions, regardless of the price range. Here, we plot up to 3.0 while hiding projections for small ranges:
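As a rough illustration, here is a minimal Pine Script sketch for the NY AM session, assuming one "standard deviation" equals one range width projected symmetrically from the range high/low (the indicator's swing mode, quadrants, and tick filters are omitted):
//@version=5
indicator("Opening Range projections (sketch)", overlay = true)
inOR = not na(time(timeframe.period, "0930-1000", "America/New_York"))
var float orHigh = na
var float orLow = na
if inOR and not inOR[1]
    orHigh := high
    orLow := low
else if inOR
    orHigh := math.max(orHigh, high)
    orLow := math.min(orLow, low)
orRange = orHigh - orLow
plot(inOR ? na : (orHigh + orLow) / 2, "CE", color.new(color.gray, 0), style = plot.style_linebr)
plot(inOR ? na : orHigh + orRange * 0.5, "+0.5 SD", color.new(color.green, 0), style = plot.style_linebr)
plot(inOR ? na : orLow - orRange * 0.5, "-0.5 SD", color.new(color.red, 0), style = plot.style_linebr)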
🟪 Legal Disclaimer
This indicator is provided for informational and educational purposes only. It is not financial advice, and should not be construed as a recommendation to buy or sell any financial instrument. Trading involves substantial risk, and you could lose a significant amount of money. Past performance is not indicative of future results. Always consult with a qualified financial professional before making any trading or investment decisions. The creators and distributors of this indicator assume no responsibility for your trading outcomes.
Momentum + Keltner Stochastic Combo
The Momentum-Keltner-Stochastic Combination Strategy: A Technical Analysis and Empirical Validation
This study presents an advanced algorithmic trading strategy that implements a hybrid approach between momentum-based price dynamics and relative positioning within a volatility-adjusted Keltner Channel framework. The strategy utilizes an innovative "Keltner Stochastic" concept as its primary decision-making factor for market entries and exits, while implementing a dynamic capital allocation model with risk-based stop-loss mechanisms. Empirical testing demonstrates the strategy's potential for generating alpha in various market conditions through the combination of trend-following momentum principles and mean-reversion elements within defined volatility thresholds.
1. Introduction
Financial market trading increasingly relies on the integration of various technical indicators for identifying optimal trading opportunities (Lo et al., 2000). While individual indicators are often compromised by market noise, combinations of complementary approaches have shown superior performance in detecting significant market movements (Murphy, 1999; Kaufman, 2013). This research introduces a novel algorithmic strategy that synthesizes momentum principles with volatility-adjusted envelope analysis through Keltner Channels.
2. Theoretical Foundation
2.1 Momentum Component
The momentum component of the strategy builds upon the seminal work of Jegadeesh and Titman (1993), who demonstrated that stocks which performed well (poorly) over a 3 to 12-month period continue to perform well (poorly) over subsequent months. As Moskowitz et al. (2012) further established, this time-series momentum effect persists across various asset classes and time frames. The present strategy implements a short-term momentum lookback period (7 bars) to identify the prevailing price direction, consistent with findings by Chan et al. (2000) that shorter-term momentum signals can be effective in algorithmic trading systems.
2.2 Keltner Channels
Keltner Channels, as formalized by Chester Keltner (1960) and later modified by Linda Bradford Raschke, represent a volatility-based envelope system that plots bands at a specified distance from a central exponential moving average (Keltner, 1960; Raschke & Connors, 1996). Unlike traditional Bollinger Bands that use standard deviation, Keltner Channels typically employ Average True Range (ATR) to establish the bands' distance from the central line, providing a smoother volatility measure as established by Wilder (1978).
2.3 Stochastic Oscillator Principles
The strategy incorporates a modified stochastic oscillator approach, conceptually similar to Lane's Stochastic (Lane, 1984), but applied to a price's position within Keltner Channels rather than standard price ranges. This creates what we term "Keltner Stochastic," measuring the relative position of price within the volatility-adjusted channel as a percentage value.
3. Strategy Methodology
3.1 Entry and Exit Conditions
The strategy employs a contrarian approach within the channel framework (a minimal sketch follows after the conditions below):
Long Entry Condition:
Close price > Close price 7 bars ago (momentum filter)
KeltnerStochastic < threshold (oversold within channel)
Short Entry Condition:
Close price < Close price 7 bars ago (momentum filter)
KeltnerStochastic > threshold (overbought within channel)
Exit Conditions:
Exit long positions when KeltnerStochastic > threshold
Exit short positions when KeltnerStochastic < threshold
This methodology aligns with research by Brock et al. (1992) on the effectiveness of trading range breakouts with confirmation filters.
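A minimal Pine Script sketch of the "Keltner Stochastic" and the entry logic above (parameter values are assumptions; the study's calibrated settings are not restated here):
//@version=5
indicator("Keltner Stochastic (sketch)")
len = input.int(20, "Channel length")
mult = input.float(2.0, "ATR multiplier")
momoLen = input.int(7, "Momentum lookback")
th = input.float(20.0, "Entry threshold (%)")
basis = ta.ema(close, len)
band = ta.atr(len) * mult
// price position inside the Keltner Channel, expressed as a percentage
kStoch = 100 * (close - (basis - band)) / math.max(2 * band, syminfo.mintick)
longSig = close > close[momoLen] and kStoch < th // momentum up, oversold within channel
shortSig = close < close[momoLen] and kStoch > 100 - th // momentum down, overbought within channel
plot(kStoch, "KeltnerStochastic")
plotshape(longSig, style = shape.triangleup, location = location.bottom, color = color.green)
plotshape(shortSig, style = shape.triangledown, location = location.top, color = color.red)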
3.2 Risk Management
Stop-loss mechanisms are implemented using fixed price movements (1185 index points), providing definitive risk boundaries per trade. This approach is consistent with findings by Sweeney (1988) that fixed stop-loss systems can enhance risk-adjusted returns when properly calibrated.
3.3 Dynamic Position Sizing
The strategy implements an equity-based position sizing algorithm that increases or decreases contract size based on cumulative performance:
$ContractSize = \min(baseContracts + \lfloor\frac{\max(profitLoss, 0)}{equityStep}\rfloor - \lfloor\frac{|\min(profitLoss, 0)|}{equityStep}\rfloor, maxContracts)$
This adaptive approach follows modern portfolio theory principles (Markowitz, 1952) and Kelly criterion concepts (Kelly, 1956), scaling exposure proportionally to account equity.
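A direct Pine Script transcription of this sizing rule (input values are illustrative):
//@version=5
strategy("Equity-step contract scaling (sketch)")
contracts(baseContracts, maxContracts, profitLoss, equityStep) =>
    up = math.floor(math.max(profitLoss, 0) / equityStep) // add one contract per equityStep of profit
    dn = math.floor(math.abs(math.min(profitLoss, 0)) / equityStep) // drop one per equityStep of loss
    math.min(baseContracts + up - dn, maxContracts)
qty = contracts(1, 10, strategy.netprofit, 1000)
plotchar(qty, "Contracts", "", location.top)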
4. Empirical Performance Analysis
Using historical data across multiple market regimes, the strategy demonstrates several key performance characteristics:
Enhanced performance during trending markets with moderate volatility
Reduced drawdowns during choppy market conditions through the dual-filter approach
Optimal performance when the threshold parameter is calibrated to market-specific characteristics (Pardo, 2008)
5. Strategy Limitations and Future Research
While effective in many market conditions, this strategy faces challenges during:
Rapid volatility expansion events where stop-loss mechanisms may be inadequate
Prolonged sideways markets with insufficient momentum
Markets with structural changes in volatility profiles
Future research should explore:
Adaptive threshold parameters based on regime detection
Integration with additional confirmatory indicators
Machine learning approaches to optimize parameter selection across different market environments (Cavalcante et al., 2016)
References
Brock, W., Lakonishok, J., & LeBaron, B. (1992). Simple technical trading rules and the stochastic properties of stock returns. The Journal of Finance, 47(5), 1731-1764.
Cavalcante, R. C., Brasileiro, R. C., Souza, V. L., Nobrega, J. P., & Oliveira, A. L. (2016). Computational intelligence and financial markets: A survey and future directions. Expert Systems with Applications, 55, 194-211.
Chan, L. K. C., Jegadeesh, N., & Lakonishok, J. (2000). Momentum strategies. The Journal of Finance, 51(5), 1681-1713.
Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. The Journal of Finance, 48(1), 65-91.
Kaufman, P. J. (2013). Trading systems and methods (5th ed.). John Wiley & Sons.
Kelly, J. L. (1956). A new interpretation of information rate. The Bell System Technical Journal, 35(4), 917-926.
Keltner, C. W. (1960). How to make money in commodities. The Keltner Statistical Service.
Lane, G. C. (1984). Lane's stochastics. Technical Analysis of Stocks & Commodities, 2(3), 87-90.
Lo, A. W., Mamaysky, H., & Wang, J. (2000). Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation. The Journal of Finance, 55(4), 1705-1765.
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77-91.
Moskowitz, T. J., Ooi, Y. H., & Pedersen, L. H. (2012). Time series momentum. Journal of Financial Economics, 104(2), 228-250.
Murphy, J. J. (1999). Technical analysis of the financial markets: A comprehensive guide to trading methods and applications. New York Institute of Finance.
Pardo, R. (2008). The evaluation and optimization of trading strategies (2nd ed.). John Wiley & Sons.
Raschke, L. B., & Connors, L. A. (1996). Street smarts: High probability short-term trading strategies. M. Gordon Publishing Group.
Sweeney, R. J. (1988). Some new filter rule tests: Methods and results. Journal of Financial and Quantitative Analysis, 23(3), 285-300.
Wilder, J. W. (1978). New concepts in technical trading systems. Trend Research.
Prediction Based on Linreg & ATR
We created this algorithm with the goal of predicting future prices 📊, specifically where the value of any asset will go in the next 20 periods ⏳. It uses linear regression based on past prices, calculating a slope and an intercept to forecast future behavior 🔮. This prediction is then adjusted according to market volatility, measured by the ATR 📉, and the direction of trend signals, which are based on the MACD and moving averages 📈.
How Does the Linreg & ATR Prediction Work?
1. Trend Calculation and Signals:
o Technical Indicators: We use short- and long-term exponential moving averages (EMA), RSI, MACD, and Bollinger Bands 📊 to assess market direction and sentiment (not visually presented in the script).
o Calculation Functions: These include functions to calculate slope, average, intercept, standard deviation, and Pearson's R, which are crucial for regression analysis 📉.
2. Predicting Future Prices:
o Linear Regression: The algorithm calculates the slope, average, and intercept of past prices to create a regression channel 📈, helping to predict the range of future prices 🔮 (see the sketch at the end of this description).
o Standard Deviation and Pearson's R: These metrics determine the strength of the regression 🔍.
3. Adjusting the Prediction:
o The predicted value is adjusted by considering market volatility (ATR 📉) and the direction of trend signals 🔮, ensuring that the prediction is aligned with the current market environment 🌍.
4. Visualization:
o Prediction Lines and Bands: The algorithm plots lines that display the predicted future price along with a prediction range (upper and lower bounds) 📉📈.
5. EMA Cross Signals:
o EMA Conditions and Total Score: A bullish crossover signal is generated when the total score is positive and the short EMA crosses above the long EMA 📈. A bearish crossover signal is generated when the total score is negative and the short EMA crosses below the long EMA 📉.
6. Additional Considerations:
o Multi-Timeframe Regression Channel: The script calculates regression channels for different timeframes (5m, 15m, 30m, 4h) ⏳, helping determine the overall market direction 📊 (not visually presented).
Confidence Interpretation:
• High Confidence (close to 100%): Indicates strong alignment between timeframes with a clear trend (bullish or bearish) 🔥.
• Low Confidence (close to 0%): Shows disagreement or weak signals between timeframes ⚠️.
Confidence complements the interpretation of the prediction range and expected direction 🔮, aiding in decision-making for market entry or exit 🚀.
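A minimal Pine Script sketch of the core regression step described above (the lookbacks and the ATR band scaling are assumptions; the published script layers trend signals, MACD, and multi-timeframe channels on top of this):
//@version=5
indicator("Linreg + ATR projection (sketch)", overlay = true)
len = input.int(100, "Regression length")
horizon = input.int(20, "Forecast periods")
endPoint = ta.linreg(close, len, 0)
slope = endPoint - ta.linreg(close, len, 1) // slope recovered from two regression endpoints
forecast = endPoint + slope * horizon // extrapolate the fitted line `horizon` periods ahead
band = ta.atr(14) * math.sqrt(horizon) // volatility-scaled uncertainty band (an assumption)
plot(forecast, "Forecast", color.new(color.blue, 0))
plot(forecast + band, "Upper", color.new(color.green, 0))
plot(forecast - band, "Lower", color.new(color.red, 0))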
RSI with Swing Trade by Kelvin_V
Algorithm Description: "RSI with Swing Trade by Kelvin_V"
1. Introduction:
This algorithm uses the RSI (Relative Strength Index) and optional Moving Averages (MA) to detect potential uptrends and downtrends in the market. The key feature of this script is that it visually changes the candle colors based on the market conditions, making it easier for users to identify potential trend swings or wave patterns.
The strategy offers flexibility by allowing users to enable or disable the MA condition. When the MA condition is enabled, the strategy will confirm trends using two moving averages. When disabled, the strategy will only use RSI to detect potential market swings.
2. Key Features of the Algorithm:
RSI (Relative Strength Index):
The RSI is used to identify potential market turning points based on overbought and oversold conditions.
When the RSI exceeds a predefined upper threshold (e.g., 60), it suggests a potential uptrend.
When the RSI drops below a lower threshold (e.g., 40), it suggests a potential downtrend.
Moving Averages (MA) - Optional:
Two Moving Averages (Short MA and Long MA) are used to confirm trends.
If the Short MA crosses above the Long MA, it indicates an uptrend.
If the Short MA crosses below the Long MA, it indicates a downtrend.
Users have the option to enable or disable this MA condition.
Visual Candle Coloring:
Green candles represent a potential uptrend, indicating a bullish move based on RSI (and MA if enabled).
Red candles represent a potential downtrend, indicating a bearish move based on RSI (and MA if enabled).
3. How the Algorithm Works:
RSI Levels:
The user can set RSI upper and lower bands to represent potential overbought and oversold levels. For example:
RSI > 60: Indicates a potential uptrend (bullish move).
RSI < 40: Indicates a potential downtrend (bearish move).
Optional MA Condition:
The algorithm also allows the user to apply the MA condition to further confirm the trend:
Short MA > Long MA: Confirms an uptrend, reinforcing a bullish signal.
Short MA < Long MA: Confirms a downtrend, reinforcing a bearish signal.
This condition can be disabled, allowing the user to focus solely on RSI signals if desired.
Swing Trade Logic:
Uptrend: If the RSI exceeds the upper threshold (e.g., 60) and (optionally) the Short MA is above the Long MA, the candles will turn green to signal a potential uptrend.
Downtrend: If the RSI falls below the lower threshold (e.g., 40) and (optionally) the Short MA is below the Long MA, the candles will turn red to signal a potential downtrend.
Visual Representation:
The candle colors change dynamically based on the RSI values and moving average conditions, making it easier for traders to visually identify potential trend swings or wave patterns without relying on complex chart analysis. A minimal sketch of this logic follows below.
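A sketch using the stated defaults (the MA lengths are assumptions, as the description does not specify them):
//@version=5
indicator("RSI swing coloring (sketch)", overlay = true)
rsiLen = input.int(4, "RSI Length")
upperBand = input.float(60, "RSI Upper Band")
lowerBand = input.float(40, "RSI Lower Band")
useMA = input.bool(true, "Enable MA Condition")
maShort = ta.ema(close, 9) // assumed short MA length
maLong = ta.ema(close, 21) // assumed long MA length
r = ta.rsi(close, rsiLen)
bull = r > upperBand and (not useMA or maShort > maLong)
bear = r < lowerBand and (not useMA or maShort < maLong)
barcolor(bull ? color.green : bear ? color.red : na)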
4. User Customization:
The algorithm provides multiple customization options:
RSI Length: Users can adjust the period for RSI calculation (default is 4).
RSI Upper Band (Potential Uptrend): Users can customize the upper RSI level (default is 60) to indicate a potential bullish move.
RSI Lower Band (Potential Downtrend): Users can customize the lower RSI level (default is 40) to indicate a potential bearish move.
MA Type: Users can choose between SMA (Simple Moving Average) and EMA (Exponential Moving Average) for moving average calculations.
Enable/Disable MA Condition: Users can toggle the MA condition on or off, depending on whether they want to add moving averages to the trend confirmation process.
5. Benefits of the Algorithm:
Easy Identification of Trends: By changing candle colors based on RSI and MA conditions, the algorithm makes it easy for users to visually detect potential trend reversals and trend swings.
Flexible Conditions: The user has full control over the RSI and MA settings, allowing them to adapt the strategy to different market conditions and timeframes.
Clear Visualization: With the candle color changes, users can quickly recognize when a potential uptrend or downtrend is forming, enabling faster decision-making in their trading.
6. Example Usage:
Day traders: Can apply this strategy on short timeframes such as 5 minutes or 15 minutes to detect quick trends or reversals.
Swing traders: Can use this strategy on longer timeframes like 1 hour or 4 hours to identify and follow larger market swings.
Intramarket Difference Index Strategy
Hi Traders !!
The IDI Strategy:
In layman’s terms this strategy compares two indicators across markets and exploits their differences.
Note: it is best if the two markets are correlated, since then we know we are trading a short- to long-term deviation from both markets' general trend, with the assumption that both markets will trend again sometime in the future, thereby exhausting our trading opportunity.
📍 Important Notes:
This strategy calculates trade position size independently (i.e., risk per trade is controlled in the user inputs tab); this means that the ‘Order size’ input in the ‘Properties’ tab will have no effect on the strategy. Why? Because this allows us to define custom position size algorithms, which we can use to improve our risk management and equity growth over time. Here we have the option of fixed-quantity or fixed-percentage-of-equity ATR (Average True Range) based stops, in addition to the turtle trading position size algorithm.
‘Pyramiding’ does not work for this strategy; similar to the order size input, toggling this input will have no effect on the strategy, as the strategy explicitly defines the maximum order size to be 1.
This strategy is not perfect, and as of the writing of this post I have not traded this algo.
Always take your time to backtest and debug the strategy.
🔷 The IDI Strategy:
By default this strategy pulls data from your current TV chart and then compares it to the base market, by default BINANCE:BTCUSD. The strategy pulls SMA and RSI data from either market (we call this the difference data), standardizes the data (solving the different-unit problem across markets) such that it is comparable, and then differences the data, calling the result of this transformation and differencing the Intramarket Difference (ID). The formula for the ID is
ID = market1_diff_data - market2_diff_data (1)
Where
market(i)_diff_data = diff_data / ATR(j)_market(i)^0.5,
where i ∈ {1, 2} and j is a natural number excluding 0 (the ATR lookback)
Formula (1) interpretation is the following
When ID > 0: this means the current market outperforms the base market
When ID = 0: Markets are at long run equilibrium
When ID < 0: this means the current market underperforms the base market
To form the strategy we define one of two strategy types: Trend and Mean Reversion, respectively.
🔸 Trend Case:
Given the ‘‘Strategy Type’’ is equal to TREND, we define a threshold: if the ID crosses over it we go long, and if the ID crosses under the negative of the threshold we go short.
The motivating idea is that the ID is an indicator of the two symbols being out of sync, and given we know volatility clustering, momentum, and mean reversion of anomalies to be stylised facts of financial data, we can construct a trading premise. Let's first talk more about this premise.
For some markets (cryptocurrency markets - synthetic symbols in TV) the stylised fact of momentum is true: higher momentum is followed by higher momentum, and given we know momentum to be a vector quantity (with magnitude and direction), this momentum can be both positive and negative. That is, when the ID crosses above some threshold we assume it will continue in that direction for some time before reverting back to its long-run equilibrium of 0, which is a reasonable assumption to make if the markets are correlated. For example, for the BTCUSD - ETHUSD pair, if the ID > +threshold (inputs for MA- and RSI-based ID thresholds are found under the ‘‘INTRAMARKET DIFFERENCE INDEX’’ group), ETHUSD outperforms BTCUSD; we assume the momentum will continue, so we go long ETHUSD.
In the standard case we would exit the market when the IDI returns to its long-run equilibrium of 0 (for the positive case, the ID may return to 0 because ETH's difference data may have decreased or BTC's difference data may have increased). However, in this strategy we will not define this as our exit condition. Why?
This is because we want to ‘‘let our winners run’’. To achieve this we define a trailing Donchian Channel stop loss (along with a fixed ATR-based stop as our volatility proxy). If we were to use the 0 exit, the strategy may print a buy signal (ID > +threshold in the simple case; market regimes may be used), return to 0, and then print another buy signal, and this process can loop many times. This high trade frequency means we fail to capture the entire market move, lowering our profit; furthermore, on lower time frames these high trade frequencies mean we pay more transaction costs (due to price slippage, commission, and bid-ask spread), which means less profit.
By capturing the sum of many momentum moves we are essentially following the trend, hence the trend-following strategy type.
Here we also print the IDI (with default strategy settings and the MA difference type). We can see that by letting our winners run we may catch many valid momentum moves, which results in a larger final PnL than if we were to exit based on the equilibrium condition (valid trades are denoted by solid green and red arrows respectively, and all other valid trades which occur within the original signal are small light green and red arrows).
another example...
Note: if you would like to plot the IDI separately copy and paste the following code in a new Pine Script indicator template.
//@version=5
indicator("IDI")

// INTRAMARKET INDEX
var string g_idi = "intramarket difference index"
ui_index_1 = input.symbol("BINANCE:BTCUSD", title = "Base market", group = g_idi)
// ui_index_2 = input.symbol("BINANCE:ETHUSD", title = "Quote Market", group = g_idi)
type = input.string("MA", title = "Differencing Series", options = ["MA", "RSI"], group = g_idi)
ui_ma_lkb = input.int(24, title = "lookback of ma and volatility scaling constant", group = g_idi)
ui_rsi_lkb = input.int(14, title = "Lookback of RSI", group = g_idi)
ui_atr_lkb = input.int(300, title = "ATR lookback - Normalising value", group = g_idi)
ui_ma_threshold = input.float(5, title = "Threshold of Upward/Downward Trend (MA)", group = g_idi)
ui_rsi_threshold = input.float(20, title = "Threshold of Upward/Downward Trend (RSI)", group = g_idi)
//>>+----------------------------------------------------------------+}
// CUSTOM FUNCTIONS                                                   |
//<<+----------------------------------------------------------------+{
// construct UDT (User Defined Type) containing the IDI (Intramarket Difference Index) source values
// UDT will hold many variables / functions grouped under the UDT
type functions
    float Close // close price
    float ma    // ma of symbol
    float rsi   // rsi of the asset
    float atr   // atr of the asset

// the security data
getUDTdata(symbol, malookback, rsilookback, atrlookback) =>
    indexHighTF = barstate.isrealtime ? 1 : 0
    [close_, ma_, rsi_, atr_] = request.security(symbol, timeframe = timeframe.period,
         expression = [close[indexHighTF], // instantiate UDT variables
             ta.sma(close, malookback)[indexHighTF],
             ta.rsi(close, rsilookback)[indexHighTF],
             ta.atr(atrlookback)[indexHighTF]])
    data = functions.new(close_, ma_, rsi_, atr_)
    data

// Intramarket Difference Index
idi(type, symbol1, malookback, rsilookback, atrlookback, mathreshold, rsithreshold) =>
    threshold = float(na)
    index1 = getUDTdata(symbol1, malookback, rsilookback, atrlookback)
    index2 = getUDTdata(syminfo.tickerid, malookback, rsilookback, atrlookback)
    // declare difference variables for both base and quote symbols, conditional on which difference type is selected
    var diffindex1 = 0.0
    var diffindex2 = 0.0
    // declare Intramarket Difference Index based on series type, note
    // if > 0, index 2 outperforms index 1, buy index 2 (momentum based) until equilibrium
    // if < 0, index 2 underperforms index 1, sell index 1 (momentum based) until equilibrium
    // for idi to be valid both series must be stationary and normalised so both series have the same scale
    intramarket_difference = 0.0
    if type == "MA"
        threshold := mathreshold
        diffindex1 := (index1.Close - index1.ma) / math.pow(index1.atr * malookback, 0.5)
        diffindex2 := (index2.Close - index2.ma) / math.pow(index2.atr * malookback, 0.5)
        intramarket_difference := diffindex2 - diffindex1
    else if type == "RSI"
        threshold := rsithreshold // fixed: the original assigned rsilookback here
        diffindex1 := index1.rsi
        diffindex2 := index2.rsi
        intramarket_difference := diffindex2 - diffindex1
    [intramarket_difference, threshold]
//>>+----------------------------------------------------------------+}
// STRATEGY FUNCTIONS CALLS                                           |
//<<+----------------------------------------------------------------+{
// plot the intramarket difference
[intramarket_difference, threshold] = idi(type,
     ui_index_1,
     ui_ma_lkb,
     ui_rsi_lkb,
     ui_atr_lkb,
     ui_ma_threshold,
     ui_rsi_threshold)
//>>+----------------------------------------------------------------+}
plot(intramarket_difference, color = color.orange)
hline(type == "MA" ? ui_ma_threshold : ui_rsi_threshold, color = color.green)
hline(type == "MA" ? -ui_ma_threshold : -ui_rsi_threshold, color = color.red)
hline(0)
Note: it is possible that after printing a buy the strategy then prints many sell signals before returning to a buy, which again has the same implication (less profit, potentially because we exit early only for price to continue upwards, hence missing the larger "trend"). The image below showcases this scenario; again, by allowing our winner to run we may capture more profit (theoretically).
This should be clear...
🔸 Mean Reversion Case:
We stated earlier that mean reversion of anomalies is a stylised fact of financial data. How can we exploit this?
We exploit it by normalizing the ID with the Ehlers Fisher Transform. The transformed data is then assumed to be approximately normally distributed. To form the strategy we employ the same logic as for the z-score: if the FT-normalized ID > 2.5 (< -2.5) we buy (short). Our exit conditions remain unchanged (fixed ATR stop and Donchian trailing stop).
🔷 Position Sizing:
If ‘‘Fixed Risk From Initial Balance’’ is toggled true, we risk a fixed percentage of our initial balance; if false, we risk a fixed percentage of our equity (current balance).
Note that we also employ a volatility-adjusted position sizing formula, the Turtle trading method, which is defined as follows.
Turtle position size = (1 / (r × ATR(j) × DV)) × C
Where:
r = risk factor coefficient (default is 20)
ATR(j) = risk proxy, over j time steps
DV = Dollar Volatility, where DV = (1 / Asset Price) × Capital at Risk
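As a rough Pine sketch of this sizing rule (the ATR length, the 1% capital-at-risk figure, and reading C as the capital at risk are assumptions for illustration, not the strategy's exact code):
```
r = 20.0                                   // risk factor coefficient
atr_j = ta.atr(14)                         // risk proxy over j = 14 time steps
capital_at_risk = strategy.equity * 0.01   // assumed: 1% of current equity
dv = (1 / close) * capital_at_risk         // dollar volatility
turtle_size = (1 / (r * atr_j * dv)) * capital_at_risk   // C taken as capital at risk (assumption)
```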
🔷 Risk Management:
Correct money management means we can limit risk and increase reward (theoretically). Here we employ:
Max loss and gain per day
Max loss per trade
Max number of consecutive losing trades until trade skip
To read more see the tooltips (info circle).
🔷 Take Profit:
By default the script uses a Donchian Channel as a trailing stop and take profit. In addition to this, the script defines a fixed ATR stop loss (by default; this covers cases where the DC range may be too wide, making a fixed ATR stop useful). ATR take profits are also defined but optional.
ATR SL and TP defined for all trades
🔷 Hurst Regime (Regime Filter):
The Hurst Exponent (H) aims to segment the market into three different states: Trending (H > 0.5), Random Geometric Brownian Motion (H = 0.5), and Mean Reverting / Contrarian (H < 0.5). In my interpretation this can be used as a trend filter that eliminates market noise.
We utilize the trending and mean-reverting states as extra conditions required for valid trades for the two strategy types respectively, in the process increasing our trade entry quality.
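As an illustration, H can be roughly estimated from how return dispersion scales with the sampling interval (the standard deviation of k-bar returns grows like k^H for a self-similar process). This is a minimal sketch of one common estimator, not necessarily the script's method:
```
// rough Hurst estimate from diffusion scaling: std(2-bar returns) / std(1-bar returns) = 2^H
len = 100
r1 = math.log(close / close[1])   // 1-bar log returns
r2 = math.log(close / close[2])   // 2-bar log returns
h = math.log(ta.stdev(r2, len) / ta.stdev(r1, len)) / math.log(2)
trending  = h > 0.5
reverting = h < 0.5
```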
🔷 Example model Architecture:
Here is an example of one configuration of this strategy, combining all aspects discussed in this post.
Future Updates
- Automation integration (next update)
AI SuperTrend x Pivot Percentile - Strategy [PresentTrading]
█ Introduction and How it is Different
The AI SuperTrend x Pivot Percentile strategy is a sophisticated trading approach that integrates AI-driven analysis with traditional technical indicators. Combining the AI SuperTrend with the Pivot Percentile strategy highlights several key advantages:
1. Enhanced Accuracy in Trend Prediction: The AI SuperTrend utilizes K-Nearest Neighbors (KNN) algorithm for trend prediction, improving accuracy by considering historical data patterns. This is complemented by the Pivot Percentile analysis which provides additional context on trend strength.
2. Comprehensive Market Analysis: The integration offers a multi-faceted approach to market analysis, combining AI insights with traditional technical indicators. This dual approach captures a broader range of market dynamics.
BTC 6H L/S Performance
█ Strategy: How it Works - Detailed Explanation
🔶 AI-Enhanced SuperTrend Indicators
1. SuperTrend Calculation:
- The SuperTrend indicator is calculated using a moving average and the Average True Range (ATR). The basic formula is:
- Upper Band = Moving Average + (Multiplier × ATR)
- Lower Band = Moving Average - (Multiplier × ATR)
- The moving average type (SMA, EMA, WMA, RMA, VWMA) and the length of the moving average and ATR are adjustable parameters.
- The direction of the trend is determined based on the position of the closing price in relation to these bands.
2. AI Integration with K-Nearest Neighbors (KNN):
- The KNN algorithm is applied to predict trend direction. It uses historical price data and SuperTrend values to classify the current trend as bullish or bearish.
- The algorithm calculates the 'distance' between the current data point and historical points. The 'k' nearest data points (neighbors) are identified based on this distance.
- A weighted average of these neighbors' trends (bullish or bearish) is calculated to predict the current trend.
For more please check: Multi-TF AI SuperTrend with ADX - Strategy
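As a rough sketch of the KNN step described above (all names, the feature choice, and the inverse-distance weighting are assumptions, not the published code):
```
k = 3
n = 24
var float[] feats  = array.new_float()
var int[]   labels = array.new_int()
// store each bar's feature (here simply the SuperTrend value) with its trend label
[st, dir] = ta.supertrend(3.0, 10)
array.push(feats, st)
array.push(labels, dir < 0 ? 1 : -1)      // direction < 0 means uptrend for ta.supertrend
if array.size(feats) > n                  // keep only the last n data points
    array.shift(feats)
    array.shift(labels)
// inverse-distance-weighted vote of the k nearest neighbors
vote = 0.0
if array.size(feats) >= k
    dists = array.new_float()
    for i = 0 to array.size(feats) - 1
        array.push(dists, math.abs(st - array.get(feats, i)))   // 'distance' to each point
    for j = 1 to k
        idx = array.indexof(dists, array.min(dists))
        vote += array.get(labels, idx) / (array.get(dists, idx) + 1e-10)
        array.set(dists, idx, 1e18)       // exclude this neighbor from the next pass
predictedBullish = vote > 0
```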
🔶 Pivot Percentile Analysis
1. Percentile Calculation:
- This involves calculating the percentile ranks for high and low prices over a set of predefined lengths.
- The percentile function is typically defined as:
- Percentile = Value at (P/100) × (N + 1)th position
- Where P is the desired percentile, and N is the number of data points.
2. Trend Strength Evaluation:
- The calculated percentiles for highs and lows are used to determine the strength of bullish and bearish trends.
- For instance, a high percentile rank in the high prices may indicate a strong bullish trend, and vice versa for bearish trends.
For more please check: Pivot Percentile Trend - Strategy
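A minimal sketch of the percentile-rank idea (parameter names are assumed, and the built-in nearest-rank percentile stands in for the exact formula above):
```
N = 10
var float[] highs = array.new_float()
array.push(highs, high)
if array.size(highs) > N
    array.shift(highs)
hiPct = array.percentile_nearest_rank(highs, 75)   // 75th percentile of recent highs
strongBull = high >= hiPct
```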
🔶 Strategy Integration
1. Combining SuperTrend and Pivot Percentile:
- The strategy synthesizes the insights from both AI-enhanced SuperTrend and Pivot Percentile analysis.
- It compares the trend direction indicated by the SuperTrend with the strength of the trend as suggested by the Pivot Percentile analysis.
2. Signal Generation:
- A trading signal is generated when both the AI-enhanced SuperTrend and the Pivot Percentile analysis agree on the trend direction.
- For instance, a bullish signal is generated when both the SuperTrend is bullish, and the Pivot Percentile analysis shows strength in bullish trends.
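As a rough sketch of this agreement rule (the stand-in calculations below are placeholders; the real strategy computes the AI trend and percentile strengths as described above):
```
[st, dir] = ta.supertrend(3.0, 10)              // stand-in for the AI-enhanced SuperTrend
bullStrength = ta.percentrank(high, 10)         // stand-in for pivot percentile strength
bearStrength = 100 - ta.percentrank(low, 10)
longSignal  = dir < 0 and bullStrength > bearStrength   // direction < 0 means uptrend
shortSignal = dir > 0 and bearStrength > bullStrength
```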
🔶 Risk Management and Filters
- ADX and DMI Filter: The strategy uses the Average Directional Index (ADX) and the Directional Movement Index (DMI) as filters to assess the trend's strength and direction.
- Dynamic Trailing Stop Loss: Based on the SuperTrend indicator, the strategy dynamically adjusts stop-loss levels to manage risk effectively.
This strategy stands out for its ability to combine real-time AI analysis with established technical indicators, offering traders a nuanced and responsive tool for navigating complex market conditions. The equations and algorithms involved are pivotal in accurately identifying market trends and potential trade opportunities.
█ Usage
To effectively use this strategy, traders should:
1. Understand the AI and Pivot Percentile Indicators: A clear grasp of how these indicators work will enable traders to make informed decisions.
2. Interpret the Signals Accurately: The strategy provides bullish, bearish, and neutral signals. Traders should align these signals with their market analysis and trading goals.
3. Monitor Market Conditions: Given that this strategy is sensitive to market dynamics, continuous monitoring is crucial for timely decision-making.
4. Adjust Settings as Needed: Traders should feel free to tweak the input parameters to suit their trading preferences and to respond to changing market conditions.
█ Default Settings and Their Impact on Performance
1. Trading Direction (Default: "Both")
Effect: Determines whether the strategy will take long positions, short positions, or both. Adjusting this setting can align the strategy with the trader's market outlook or risk preference.
2. AI Settings (Neighbors: 3, Data Points: 24)
Neighbors: The number of nearest neighbors in the KNN algorithm. A higher number might smooth out noise but could miss subtle, recent changes. A lower number makes the model more sensitive to recent data but may increase noise.
Data Points: Defines the amount of historical data considered. More data points provide a broader context but may dilute recent trends' impact.
3. SuperTrend Settings (Length: 10, Factor: 3.0, MA Source: "WMA")
Length: Affects the sensitivity of the SuperTrend indicator. A longer length results in a smoother, less sensitive indicator, ideal for long-term trends.
Factor: Determines the bandwidth of the SuperTrend. A higher factor creates wider bands, capturing larger price movements but potentially missing short-term signals.
MA Source: The type of moving average used (e.g., WMA - Weighted Moving Average). Different MA types can affect the trend indicator's responsiveness and smoothness.
4. AI Trend Prediction Settings (Price Trend: 10, Prediction Trend: 80)
Price Trend and Prediction Trend Lengths: These settings define the lengths of weighted moving averages for price and SuperTrend, impacting the responsiveness and smoothness of the AI's trend predictions.
5. Pivot Percentile Settings (Length: 10)
Length: Influences the calculation of pivot percentiles. A shorter length makes the percentile more responsive to recent price changes, while a longer length offers a broader view of price trends.
6. ADX and DMI Settings (ADX Length: 14, Time Frame: 'D')
ADX Length: Defines the period for the Average Directional Index calculation. A longer period results in a smoother ADX line.
Time Frame: Sets the time frame for the ADX and DMI calculations, affecting the sensitivity to market changes.
7. Commission, Slippage, and Initial Capital
These settings relate to transaction costs and initial investment, directly impacting net profitability and strategy feasibility.
Holiday
Library "Holiday"
- Full Control over Holidays and Daylight Savings Time (DLS)
The Holiday Library is an essential tool for traders and analysts who engage in backtesting and live trading. This comprehensive library enables the incorporation of crucial calendar elements - specifically Daylight Savings Time (DLS) adjustments and public holidays - into trading strategies and backtesting environments.
Key Features:
- DLS Adjustments: The library takes into account the shifts in time due to Daylight Savings. This feature is particularly vital for backtesting strategies, as DLS can impact trading hours, which in turn affects the volatility and liquidity in the market. Accurate DLS adjustments ensure that backtesting scenarios are as close to real-life conditions as possible.
- Comprehensive Holiday Metadata: The library includes a rich set of holiday metadata, allowing for the detailed scheduling of trading activities around public holidays. This feature is crucial for avoiding skewed results in backtesting, where holiday trading sessions might differ significantly in terms of volume and price movement.
- Customizable Holiday Schedules: Users can add or remove specific holidays, tailoring the library to fit various regional market schedules or specific trading requirements.
- Visualization Aids: The library supports on-chart labels, making it visually intuitive to identify holidays and DLS shifts directly on trading charts.
Use Cases:
1. Strategy Development: When developing trading strategies, it’s important to account for non-trading days and altered trading hours due to holidays and DLS. This library enables a realistic and accurate representation of these factors.
2. Risk Management: Trading around holidays can be riskier due to thinner liquidity and greater volatility. By integrating holiday data, traders can better manage their risk exposure.
3. Backtesting Accuracy: For backtesting to be effective, it must simulate the actual market conditions as closely as possible. Incorporating holidays and DLS adjustments contributes to more reliable and realistic backtesting results.
4. Global Trading: For traders active in multiple global markets, this library provides an easy way to handle different holiday schedules and DLS shifts across regions.
The Holiday Library is a versatile tool that enhances the precision and realism of trading simulations and strategy development. Its integration into the trading workflow is straightforward and beneficial for both novice and experienced traders.
EasterAlgo(_year)
Calculates the date of Easter Sunday for a given year using the Anonymous Gregorian algorithm.
`Gauss Algorithm for Easter Sunday` was developed by the mathematician Carl Friedrich Gauss
This algorithm is based on the cycles of the moon and the fact that Easter always falls on the first Sunday after the first ecclesiastical full moon that occurs on or after March 21.
While it's not considered to be 100% accurate due to rare exceptions, it does give the correct date in most cases.
It's important to note that Gauss's formula has been found to be inaccurate for some 21st-century years in the Gregorian calendar. Specifically, the next suggested failure years are 2038, 2051.
This function can be used for Good Friday (Friday before Easter), Easter Sunday, and Easter Monday (following Monday).
en.wikipedia.org
Parameters:
_year (int) : `int` - The year for which to calculate the date of Easter Sunday. This should be a four-digit year (YYYY).
Returns: tuple - The month (1-12) and day (1-31) of Easter Sunday for the given year.
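For reference, here is the textbook Anonymous Gregorian computus the description names, sketched in Pine (the standard algorithm, not necessarily the library's literal code):
```
easterAlgo(int _year) =>
    a = _year % 19
    b = _year / 100                      // integer division in Pine truncates
    c = _year % 100
    d = b / 4
    e = b % 4
    f = (b + 8) / 25
    g = (b - f + 1) / 3
    h = (19 * a + b - d - g + 15) % 30
    i = c / 4
    k = c % 4
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) / 451
    mo = (h + l - 7 * m + 114) / 31      // month: 3 = March, 4 = April
    dy = (h + l - 7 * m + 114) % 31 + 1  // day of month
    [mo, dy]
```
For example, easterAlgo(2025) returns [4, 20], i.e., Easter Sunday on April 20, 2025.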
easterInit()
Inits the date of Easter Sunday and Good Friday for a given year.
Returns: tuple - The month (1-12) and day (1-31) of Easter Sunday and Good Friday for the given year.
isLeapYear(_year)
Determine if a year is a leap year.
Parameters:
_year (int) : `int` - 4 digit year to check => YYYY
Returns: `bool` - true if input year is a leap year
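The Gregorian rule this function implements can be sketched as:
```
isLeapYear(int _year) =>
    _year % 4 == 0 and (_year % 100 != 0 or _year % 400 == 0)
```
For example, 2024 yields true, while 2100 yields false (divisible by 100 but not by 400).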
method timezoneHelper(utc)
Helper function to convert UTC time.
Namespace types: series int, simple int, input int, const int
Parameters:
utc (int) : `int` - UTC time shift in hours.
Returns: `string`- UTC time string with shift applied.
weekofmonth()
Function to find the week of the month of a given Unix Time.
Returns: number - The week of the month of the specified UTC time.
dayLightSavingsAdjustedUTC(utc, adjustForDLS)
dayLightSavingsAdjustedUTC
Parameters:
utc (int) : `int` - The normal UTC timestamp to be used for reference.
adjustForDLS (bool) : `bool` - Flag indicating whether to adjust for daylight savings time (DLS).
Returns: `int` - The adjusted UTC timestamp for the given normal UTC timestamp.
getDayOfYear(monthOfYear, dayOfMonth, weekOfMonth, dayOfWeek, lastOccurrenceInMonth, holiday)
Function gets the day of the year of a given holiday (1-366)
Parameters:
monthOfYear (int)
dayOfMonth (int)
weekOfMonth (int)
dayOfWeek (int)
lastOccurrenceInMonth (bool)
holiday (string)
Returns: `int` - The day of the year of the holiday 1-366.
method buildMap(holidayMap, holiday, monthOfYear, weekOfMonth, dayOfWeek, dayOfMonth, lastOccurrenceInMonth, closingTime)
Function to build the `holidaysMap`.
Namespace types: map
Parameters:
holidayMap (map) : `map` - The map of holidays.
holiday (string) : `string` - The name of the holiday.
monthOfYear (int) : `int` - The month of the year of the holiday.
weekOfMonth (int) : `int` - The week of the month of the holiday.
dayOfWeek (int) : `int` - The day of the week of the holiday.
dayOfMonth (int) : `int` - The day of the month of the holiday.
lastOccurrenceInMonth (bool) : `bool` - Flag indicating whether the holiday is the last occurrence of the day in the month.
closingTime (int) : `int` - The closing time of the holiday.
Returns: `map` - The updated map of holidays
holidayInit(addHolidaysArray, removeHolidaysArray, defaultHolidays)
Initializes a HolidayStorage object with predefined US holidays.
Parameters:
addHolidaysArray (array) : `array` - The array of additional holidays to be added.
removeHolidaysArray (array) : `array` - The array of holidays to be removed.
defaultHolidays (bool) : `bool` - Flag indicating whether to include the default holidays.
Returns: `map` - The map of holidays.
Holidays(utc, addHolidaysArray, removeHolidaysArray, adjustForDLS, displayLabel, defaultHolidays)
Main function to build the holidays object, this is the only function from this library that should be needed. \
all functionality should be available through this function. \
With the exception of initializing a `HolidayMetaData` object to add a holiday or early close. \
\
**Default Holidays:** \
`DLS begin`, `DLS end`, `New Year's Day`, `MLK Jr. Day`, \
`Washington Day`, `Memorial Day`, `Independence Day`, `Labor Day`, \
`Columbus Day`, `Veterans Day`, `Thanksgiving Day`, `Christmas Day` \
\
**Example**
```
HolidayMetaData valentinesDay = HolidayMetaData.new(holiday="Valentine's Day", monthOfYear=2, dayOfMonth=14)
HolidayMetaData stPatricksDay = HolidayMetaData.new(holiday="St. Patrick's Day", monthOfYear=3, dayOfMonth=17)
array<HolidayMetaData> addHolidaysArray = array.from(valentinesDay, stPatricksDay)
array<string> removeHolidaysArray = array.from("DLS begin", "DLS end")
Holidays = Holidays(
     utc=-6,
     addHolidaysArray=addHolidaysArray,
     removeHolidaysArray=removeHolidaysArray,
     adjustForDLS=true,
     displayLabel=true,
     defaultHolidays=true)
plot(Holidays.newHoliday ? open : na, title="newHoliday", color=color.red, linewidth=4, style=plot.style_circles)
```
Parameters:
utc (int) : `int` - The UTC time shift in hours
addHolidaysArray (array) : `array` - The array of additional holidays to be added
removeHolidaysArray (array) : `array` - The array of holidays to be removed
adjustForDLS (bool) : `bool` - Flag indicating whether to adjust for daylight savings time (DLS)
displayLabel (bool) : `bool` - Flag indicating whether to display a label on the chart
defaultHolidays (bool) : `bool` - Flag indicating whether to include the default holidays
Returns: `HolidayObject` - The holidays object | Holidays = (holidaysMap: map, newHoliday: bool, holiday: string, dayString: string)
HolidayMetaData
HolidayMetaData
Fields:
holiday (series string) : `string` - The name of the holiday.
dayOfYear (series int) : `int` - The day of the year of the holiday.
monthOfYear (series int) : `int` - The month of the year of the holiday.
dayOfMonth (series int) : `int` - The day of the month of the holiday.
weekOfMonth (series int) : `int` - The week of the month of the holiday.
dayOfWeek (series int) : `int` - The day of the week of the holiday.
lastOccurrenceInMonth (series bool)
closingTime (series int) : `int` - The closing time of the holiday.
utc (series int) : `int` - The UTC time shift in hours.
HolidayObject
HolidayObject
Fields:
holidaysMap (map) : `map` - The map of holidays.
newHoliday (series bool) : `bool` - Flag indicating whether today is a new holiday.
activeHoliday (series bool) : `bool` - Flag indicating whether today is an active holiday.
holiday (series string) : `string` - The name of the holiday.
dayString (series string) : `string` - The day of the week of the holiday.
Volume Forecast
The Volume Forecast indicator on TradingView is a comprehensive tool designed to analyze historical price action and project future market movements based on the average sizes of candles. Incorporating various data points such as candle high/low, open/close, and real volumes, Volume Forecast provides traders with a holistic view of market dynamics, allowing for more informed decision-making.
Key Features:
Multi-Data Source Analysis:
Volume Forecast seamlessly integrates multiple data sources, including candle high/low, open/close prices, and real volumes. By considering these diverse elements, the indicator offers a nuanced understanding of market conditions.
Historical Candle Size Analysis:
The indicator conducts a thorough analysis of historical candle sizes, capturing key data points to calculate the average candle size over a specified period. This historical context serves as the foundation for forecasting future candle sizes.
Customizable Forecasting Parameters:
Traders have the flexibility to fine-tune forecasting parameters to align with their trading strategies. Whether focusing on open/close relationships, high/low points, or real volumes, users can customize the indicator to suit their preferences.
Predictive Algorithm:
Volume Forecast employs a sophisticated predictive algorithm that leverages historical candle size data to project the potential size of upcoming candles. This algorithmic approach enhances the indicator's accuracy in forecasting market movements.
Visual Clarity:
The indicator provides a clear visual representation on the TradingView chart, displaying historical candle sizes and forecasted values. Color-coded elements and visual cues help traders quickly interpret the data, facilitating timely decision-making.
Adaptive Real-Time Updates:
Volume Forecast dynamically updates in real-time, ensuring traders have access to the latest information. This adaptability allows for swift adjustments to trading strategies in response to changing market conditions.
Comprehensive Market Compatibility:
Whether trading stocks, forex, cryptocurrencies, or commodities, Volume Forecast is compatible across various financial instruments and timeframes. This versatility makes it a valuable asset for traders in different markets.
User-Friendly Interface:
With an intuitive interface, Volume Forecast is accessible to traders of all experience levels. The indicator's user-friendly design streamlines the analysis process, making it easier for traders to incorporate it into their trading routines.
In summary, Volume Forecast is a robust TradingView indicator that combines historical candle size analysis with advanced forecasting techniques. By incorporating multiple data sources and offering customization options, it empowers traders to make more informed decisions in anticipation of market movements. Whether used independently or in conjunction with other tools, Volume Forecast is a valuable asset for traders seeking a comprehensive understanding of market dynamics.
Grid by Volatility (Expo)
█ Overview
The Grid by Volatility is designed to provide a dynamic grid overlay on your price chart. This grid is calculated based on the volatility and adjusts in real-time as market conditions change. The indicator uses Standard Deviation to determine volatility and is useful for traders looking to understand price volatility patterns, determine potential support and resistance levels, or validate other trading signals.
█ How It Works
The indicator initiates its computations by assessing the market volatility through an established statistical model: the Standard Deviation. Following the volatility determination, the algorithm calculates a central equilibrium line—commonly referred to as the "mid-line"—on the chart to serve as a baseline for additional computations. Subsequently, upper and lower grid lines are algorithmically generated and plotted equidistantly from the central mid-line, with the distance being dictated by the previously calculated volatility metrics.
█ How to Use
Trend Analysis: The grid can be used to analyze the underlying trend of the asset. For example, if the price is above the Average Line and moves toward the Upper Range, it indicates a strong bullish trend.
Support and Resistance: The grid lines can act as dynamic support and resistance levels. Price tends to bounce off these levels or breakthrough, providing potential trade opportunities.
Volatility Gauge: The distance between the grid lines serves as a measure of market volatility. Wider lines indicate higher volatility, while narrower lines suggest low volatility.
█ Settings
Volatility Length: Number of bars to calculate the Standard Deviation (Default: 200)
Squeeze Adjustment: Multiplier for the Standard Deviation (Default: 6)
Grid Confirmation Length: Number of bars to calculate the weighted moving average for smoothing the grid lines (Default: 2)
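Putting the settings together, a minimal sketch of the construction described above (the mid-line source and exactly where the smoothing is applied are assumptions, not the indicator's exact code):
```
volLength = 200   // Volatility Length
squeeze   = 6     // Squeeze Adjustment
confirm   = 2     // Grid Confirmation Length
mid   = ta.sma(close, volLength)        // assumed central equilibrium line
dev   = ta.stdev(close, volLength) * squeeze
upper = ta.wma(mid + dev, confirm)      // grid lines smoothed with a WMA per the setting
lower = ta.wma(mid - dev, confirm)
```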
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Channel Based Zigzag [HeWhoMustNotBeNamed]
🎲 Concept
Zigzag is built based on the price and number of offset bars. But, in this experiment, we build zigzag based on different bands such as Bollinger Band, Keltner Channel and Donchian Channel. The process is simple:
🎯 Derive bands based on input parameters
🎯 High of a bar is considered as pivot high only if the high price is above or equal to upper band.
🎯 Similarly low of a bar is considered as pivot low only if low price is below or equal to lower band.
🎯 Adding the pivot high/low follows same logic as that of regular zigzag where pivot high is always followed by pivot low and vice versa.
🎯 If the new pivot added is of same direction as that of last pivot, then both pivots are compared with each other and only the extreme one is kept. (Highest in case of pivot high and lowest in case of pivot low)
🎯 If a bar has both pivot high and pivot low - pivot with same direction as previous pivot is added to the list first before adding the pivot with opposite direction.
🎲 Use Cases
Can be used for pattern recognition algorithms instead of standard zigzag. This will help derive patterns which are relative to bands and channels.
Example: John Bollinger explains how to manually scan for double tops using Bollinger Bands in this video: www.youtube.com This modified zigzag base can be used to achieve the same using algorithmic means.
🎲 Settings
Few simple configurations which will let you select the band properties. Notice that there is no zigzag length here. All the calculations depend on the bands.
With bands display, indicator looks something like this
Note that pivots do not always represent highest/lowest prices. They represent highest/lowest price relative to bands.
As mentioned many times, the application of zigzag is not buying at a lower price and selling at a higher price. It is mainly used for pattern recognition, either manually or via algorithms. Let's build new harmonic patterns, chart patterns, and trend lines using the new zigzag!
Machine Learning: kNN (New Approach)
Description:
kNN is a very robust and simple method for data classification and prediction. It is very effective if the training data is large. However, it is hampered by the difficulty of determining its main parameter, K (the number of nearest neighbors), beforehand. The computational cost is also quite high because we need to compute the distance from each instance to all training samples. Nevertheless, in algorithmic trading kNN is reported to perform on a par with such techniques as SVM and Random Forest. It is also widely used in the area of data science.
The input data is just a long series of prices over time without any particular features. The value to be predicted is just the next bar's price. The way that this problem is solved for both nearest neighbor techniques and for some other types of prediction algorithms is to create training records by taking, for instance, 10 consecutive prices and using the first 9 as predictor values and the 10th as the prediction value. Done this way, given 100 data points in your time series you could create 10 different training records. It's possible to create even more training records than 10 by creating a new record starting at every data point. For instance, you could take the first 10 data points and create a record. Then you could take the 10 consecutive data points starting at the second data point, the 10 consecutive data points starting at the third data point, etc.
By default, shown are only 10 initial data points as predictor values and the 6th as the prediction value.
Here is a step-by-step walkthrough of how to compute the K nearest neighbors (KNN) algorithm for quantitative data:
1. Determine parameter K = number of nearest neighbors.
2. Calculate the distance between the instance and all the training samples. As we are dealing with one-dimensional distance, we simply take the absolute value of the difference between the instance and the training value (|x − v|).
3. Rank the distance and determine nearest neighbors based on the K'th minimum distance.
4. Gather the values of the nearest neighbors.
5. Use average of nearest neighbors as the prediction value of the instance.
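Those five steps, for one-dimensional data, can be sketched as follows (names are illustrative, not the author's code):
```
knnPredict(float x, float[] xs, float[] ys, int k) =>
    dists = array.new_float()
    for i = 0 to array.size(xs) - 1
        array.push(dists, math.abs(x - array.get(xs, i)))   // step 2: |x - v|
    sum = 0.0
    for j = 1 to k                                          // steps 3-4: gather the k nearest
        idx = array.indexof(dists, array.min(dists))
        sum += array.get(ys, idx)
        array.set(dists, idx, 1e18)                         // exclude from the next pass
    sum / k                                                 // step 5: average as the prediction
```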
The original logic of the algorithm was slightly modified, and as a result at approx. N=17 the resulting curve nicely approximates that of the sma(20). See the description below. Besides the SMA-like MA, this algorithm also gives you a hint on the direction of the next bar's move.
Cycles Strategy
This back-testable strategy is a modified version of the Stochastic strategy. It is meant to accompany the modified Stochastic indicator: "Cycles".
Modifications to the Stochastic strategy include;
1. Programmable settings for the Stochastic Periods (%K, %D and Smooth %K).
2. Programmable settings for the MACD Periods (Fast, Slow, Smoothing)
3. Programmable thresholds for %K, to qualify a potential entry strategy.
4. Programmable thresholds for %D, to qualify a potential exit strategy.
5. Buttons to choose which components to use in the trading algorithm.
6. Choose the month and year to back test.
The trading algorithm:
1. Entry: %K exceeds the upper/lower threshold and then hooks down/up in the direction of the Moving Average (MA). This is the minimum entry qualification.
2. Exit: %D exceeds the lower/upper threshold and is angled in the direction of the trade.
3. Additional entry filters include the direction of MACD, Signal and %D. Also, the "cliff", being a long entry is a higher high or a short entry is a lower low.
4. Strategy can only go "Long" or "Short" depending on the selected setting.
5. By matching the settings in the "Cycles" indicator, you can (almost) see what the strategy is doing.
6. Be sure to select the "Recalculate" buttons, to recalculate on every new Tick, for best results.
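As a rough sketch of entry qualification 1 (the periods, thresholds, and MA length below are placeholders, not the script's settings):
```
kRaw = ta.stoch(close, high, low, 14)
k = ta.sma(kRaw, 3)                      // Smooth %K
d = ta.sma(k, 3)                         // %D
ma = ta.sma(close, 50)                   // assumed trend MA
longHook  = k[1] <= 20 and k > k[1] and ma > ma[1]   // hook up from the lower threshold with rising MA
shortHook = k[1] >= 80 and k < k[1] and ma < ma[1]   // hook down from the upper threshold with falling MA
```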
Please click the Like button and leave a comment if you appreciate this script. Improvements will be implemented as time goes on.
I am not a licensed trade advisor. This strategy is for entertainment only. Use at your own risk!
MLExtensions_Core
Library "MLExtensions_Core"
A set of extension methods for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm in Lorentzian space, focused on computation.
normalizeDeriv(src, quadraticMeanLength)
Returns the normalized derivative of the input series.
Parameters:
src (float) : The input series (i.e., the first-order derivative for price).
quadraticMeanLength (int) : The length of the quadratic mean (RMS).
Returns: nDeriv The normalized derivative of the input series.
normalize(src, min, max)
Rescales a source value with an unbounded range to a target range.
Parameters:
src (float) : The input series
min (float) : The minimum value of the unbounded range
max (float) : The maximum value of the unbounded range
Returns: The normalized series
rescale(src, oldMin, oldMax, newMin, newMax)
Rescales a source value with a bounded range to anther bounded range
Parameters:
src (float) : The input series
oldMin (float) : The minimum value of the range to rescale from
oldMax (float) : The maximum value of the range to rescale from
newMin (float) : The minimum value of the range to rescale to
newMax (float) : The maximum value of the range to rescale to
Returns: The rescaled series
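Given these parameters, a minimal sketch of what rescale does (the small epsilon guard against a zero-width source range is an assumption):
```
rescale(float src, float oldMin, float oldMax, float newMin, float newMax) =>
    newMin + (newMax - newMin) * (src - oldMin) / math.max(oldMax - oldMin, 10e-10)
```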
getColorShades(color)
Creates an array of colors with varying shades of the input color
Parameters:
color (color) : The color to create shades of
Returns: An array of colors with varying shades of the input color
getPredictionColor(prediction, neighborsCount, shadesArr)
Determines the color shade based on prediction percentile
Parameters:
prediction (float) : Value of the prediction
neighborsCount (int) : The number of neighbors used in a nearest neighbors classification
shadesArr (array) : An array of colors with varying shades of the input color
Returns: shade Color shade based on prediction percentile
color_green(prediction)
Assigns varying shades of the color green based on the KNN classification
Parameters:
prediction (float) : Value (int|float) of the prediction
Returns: color
color_red(prediction)
Assigns varying shades of the color red based on the KNN classification
Parameters:
prediction (float) : Value of the prediction
Returns: color
tanh(src)
Returns the hyperbolic tangent of the input series. The sigmoid-like hyperbolic tangent function is used to compress the input to a value between -1 and 1.
Parameters:
src (float) : The input series (i.e., the normalized derivative).
Returns: tanh The hyperbolic tangent of the input series.
dualPoleFilter(src, lookback)
Returns the smoothed hyperbolic tangent of the input series.
Parameters:
src (float) : The input series (i.e., the hyperbolic tangent).
lookback (int) : The lookback window for the smoothing.
Returns: filter The smoothed hyperbolic tangent of the input series.
tanhTransform(src, smoothingFrequency, quadraticMeanLength)
Returns the tanh transform of the input series.
Parameters:
src (float) : The input series (i.e., the result of the tanh calculation).
smoothingFrequency (int)
quadraticMeanLength (int)
Returns: signal The smoothed hyperbolic tangent transform of the input series.
n_rsi(src, n1, n2)
Returns the normalized RSI ideal for use in ML algorithms.
Parameters:
src (float) : The input series (i.e., the result of the RSI calculation).
n1 (simple int) : The length of the RSI.
n2 (simple int) : The smoothing length of the RSI.
Returns: signal The normalized RSI.
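One plausible reading of this signature, reusing the rescale sketch from above (the [0, 1] target range is an assumption based on "normalized ... ideal for use in ML algorithms"):
```
n_rsi(float src, simple int n1, simple int n2) =>
    rescale(ta.ema(ta.rsi(src, n1), n2), 0, 100, 0, 1)
```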
n_cci(src, n1, n2)
Returns the normalized CCI ideal for use in ML algorithms.
Parameters:
src (float) : The input series (i.e., the result of the CCI calculation).
n1 (simple int) : The length of the CCI.
n2 (simple int) : The smoothing length of the CCI.
Returns: signal The normalized CCI.
n_wt(src, n1, n2)
Returns the normalized WaveTrend Classic series ideal for use in ML algorithms.
Parameters:
src (float) : The input series (i.e., the result of the WaveTrend Classic calculation).
n1 (simple int)
n2 (simple int)
Returns: signal The normalized WaveTrend Classic series.
n_adx(highSrc, lowSrc, closeSrc, n1)
Returns the normalized ADX ideal for use in ML algorithms.
Parameters:
highSrc (float) : The input series for the high price.
lowSrc (float) : The input series for the low price.
closeSrc (float) : The input series for the close price.
n1 (simple int) : The length of the ADX.
Returns: signal The normalized ADX.
regime_filter(src, threshold, useRegimeFilter)
Parameters:
src (float)
threshold (float)
useRegimeFilter (bool)
filter_adx(src, length, adxThreshold, useAdxFilter)
filter_adx
Parameters:
src (float) : The source series.
length (simple int) : The length of the ADX.
adxThreshold (int) : The ADX threshold.
useAdxFilter (bool) : Whether to use the ADX filter.
Returns: The ADX.
filter_volatility(minLength, maxLength, sensitivityMultiplier, useVolatilityFilter)
filter_volatility
Parameters:
minLength (simple int) : The minimum length of the ATR.
maxLength (simple int) : The maximum length of the ATR.
sensitivityMultiplier (float) : Multiplier for the historical ATR to control sensitivity.
useVolatilityFilter (bool) : Whether to use the volatility filter.
Returns: Boolean indicating whether or not to let the signal pass through the filter.
EXODUS
EXODUS by (DAFE) Trading Systems
EXODUS is a sophisticated trading algorithm built by Dskyz (DAFE) Trading Systems for competitive and competition purposes, designed to identify high-probability trades with robust risk management. this strategy leverages a multi-signal voting system, combining three core components—SPR, VWMO, and VEI—alongside ADX, choppiness filters, and ATR-based volatility gates to ensure trades are taken only in favorable market conditions. the algo uses a take-profit to stop-loss ratio, dynamic position sizing, and a strict voting mechanism requiring all signals to align before entering a trade.
EXODUS was not overfitted for any specific symbol. instead, it uses a generic tuned setting, making it versatile across various markets. while it can trade futures, it’s not currently set up for it but has the potential to do more with further development. visuals are intentionally minimal due to its competition focus, prioritizing performance over aesthetics. a more visually stunning version may be released in the future with enhanced graphics.
The Unique Core Components Developed for EXODUS
SPR (Session Price Recalibration)
SPR measures momentum during regular trading hours (RTH, 0930-1600, America/New_York) to catch session-specific trends.
spr_lookback = input.int(15, "SPR Lookback")
this sets how many bars back SPR looks to calculate momentum (default 15 bars). it compares the current session’s price-volume score to the score 15 bars ago to gauge momentum strength.
how it works: a longer lookback smooths out the signal, focusing on bigger trends. a shorter one makes SPR more sensitive to recent moves.
how to adjust: on a 1-hour chart, 15 bars is 15 hours (about 2 trading days). if you’re on a shorter timeframe like 5 minutes, 15 bars is just 75 minutes, so you might want to increase it to 50 or 100 to capture more meaningful trends. if you’re trading a choppy stock, a shorter lookback (like 5) can help catch quick moves, but it might give more false signals.
spr_threshold = input.float(0.7, "SPR Threshold")
this is the cutoff for SPR to vote for a trade (default 0.7). if SPR’s normalized value is above 0.7, it votes for a long; below -0.7, it votes for a short.
how it works: SPR normalizes its momentum score by ATR, so this threshold ensures only strong moves count. a higher threshold means fewer trades but higher conviction.
how to adjust: if you’re getting too few trades, lower it to 0.5 to let more signals through. if you’re seeing too many false entries, raise it to 1.0 for stricter filtering. test on your chart to find a balance.
spr_atr_length = input.int(21, "SPR ATR Length")
this sets the ATR period (default 21 bars) used to normalize SPR’s momentum score. ATR measures volatility, so this makes SPR’s signal relative to market conditions.
how it works: a longer ATR period (like 21) smooths out volatility, making SPR less jumpy. a shorter one makes it more reactive.
how to adjust: if you’re trading a volatile stock like TSLA, a longer period (30 or 50) can help avoid noise. for a calmer stock, try 10 to make SPR more responsive. match this to your timeframe—shorter timeframes might need a shorter ATR.
rth_session = input.session("0930-1600", "SPR: RTH Sess.")
rth_timezone = "America/New_York"
this defines the session SPR uses (0930-1600, New York time). SPR only calculates momentum during these hours to focus on RTH activity.
how it works: it ignores pre-market or after-hours noise, ensuring SPR captures the main market action.
how to adjust: if you trade a different session (like London hours, 0300-1200 EST), change the session to match. you can also adjust the timezone if you’re in a different region, like "Europe/London". just make sure your chart’s timezone aligns with this setting.
VWMO (Volume-Weighted Momentum Oscillator)
VWMO measures momentum weighted by volume to spot sustained, high-conviction moves.
vwmo_momlen = input.int(21, "VWMO Momentum Length")
this sets how many bars back VWMO looks to calculate price momentum (default 21 bars). it takes the price change (close minus close 21 bars ago).
how it works: a longer period captures bigger trends, while a shorter one reacts to recent swings.
how to adjust: on a daily chart, 21 bars is about a month—good for trend trading. on a 5-minute chart, it’s just 105 minutes, so you might bump it to 50 or 100 for more meaningful moves. if you want faster signals, drop it to 10, but expect more noise.
vwmo_volback = input.int(30, "VWMO Volume Lookback")
this sets the period for calculating average volume (default 30 bars). VWMO weights momentum by volume divided by this average.
how it works: it compares current volume to the average to see if a move has strong participation. a longer lookback smooths the average, while a shorter one makes it more sensitive.
how to adjust: for stocks with spiky volume (like NVDA on earnings), a longer lookback (50 or 100) avoids overreacting to one-off spikes. for steady volume stocks, try 20. match this to your timeframe—shorter timeframes might need a shorter lookback.
vwmo_smooth = input.int(9, "VWMO Smoothing")
this sets the SMA period to smooth VWMO’s raw momentum (default 9 bars).
how it works: smoothing reduces noise in the signal, making VWMO more reliable for voting. a longer smoothing period cuts more noise but adds lag.
how to adjust: if VWMO is too jumpy (lots of false votes), increase to 15. if it’s too slow and missing trades, drop to 5. test on your chart to see what keeps the signal clean but responsive.
vwmo_threshold = input.float(10, "VWMO Threshold")
this is the cutoff for VWMO to vote for a trade (default 10). above 10, it votes for a long; below -10, a short.
how it works: it ensures only strong momentum signals count. a higher threshold means fewer but stronger trades.
how to adjust: if you want more trades, lower it to 5. if you’re getting too many weak signals, raise it to 15. this depends on your market—volatile stocks might need a higher threshold to filter noise.
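a rough pine sketch of VWMO as described (names mirror the inputs above; the exact weighting is an assumption, not the published code):
```
vwmo_raw = (close - close[vwmo_momlen]) * (volume / ta.sma(volume, vwmo_volback))
vwmo = ta.sma(vwmo_raw, vwmo_smooth)
vwmoLongVote  = vwmo > vwmo_threshold
vwmoShortVote = vwmo < -vwmo_threshold
```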
VEI (Velocity Efficiency Index)
VEI measures market efficiency and velocity to filter out choppy moves and focus on strong trends.
vei_eflen = input.int(14, "VEI Efficiency Smoothing")
this sets the EMA period for smoothing VEI’s efficiency calc (bar range / volume, default 14 bars).
how it works: efficiency is how much price moves per unit of volume. smoothing it with an EMA reduces noise, focusing on consistent efficiency. a longer period smooths more but adds lag.
how to adjust: for choppy markets, increase to 20 to filter out noise. for faster markets, drop to 10 for quicker signals. this should match your timeframe—shorter timeframes might need a shorter period.
vei_momlen = input.int(8, "VEI Momentum Length")
this sets how many bars back VEI looks to calculate momentum in efficiency (default 8 bars).
how it works: it measures the change in smoothed efficiency over 8 bars, then adjusts for inertia (volume-to-range). a longer period captures bigger shifts, while a shorter one reacts faster.
how to adjust: if VEI is missing quick reversals, drop to 5. if it’s too noisy, raise to 12. test on your chart to see what catches the right moves without too many false signals.
vei_threshold = input.float(4.5, "VEI Threshold")
this is the cutoff for VEI to vote for a trade (default 4.5). above 4.5, it votes for a long; below -4.5, a short.
how it works: it ensures only strong, efficient moves count. a higher threshold means fewer trades but higher quality.
how to adjust: if you’re not getting enough trades, lower to 3. if you’re seeing too many false entries, raise to 6. this depends on your market—fast stocks like NQ1 might need a lower threshold.
Features
Multi-Signal Voting: requires all three signals (SPR, VWMO, VEI) to align for a trade, ensuring high-probability setups.
Risk Management: uses ATR-based stops (2.1x) and take-profits (4.1x), with dynamic position sizing based on a risk percentage (default 0.4%).
Market Filters: ADX (default 27) ensures trending conditions, choppiness index (default 54.5) avoids sideways markets, and ATR expansion (default 1.12) confirms volatility.
Dashboard: provides real-time stats like SPR, VWMO, VEI values, net P/L, win rate, and streak, with a clean, functional design.
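a rough sketch of the all-align voting gate plus the regime filters (every calculation below is a stand-in using the default thresholds from this description; the real script computes SPR/VWMO/VEI as described above, and the choppiness filter is omitted here):
```
// stand-ins for the three component scores
spr  = ta.ema(close - close[15], 5) / ta.atr(21)
vwmo = ta.sma((close - close[21]) * volume / ta.sma(volume, 30), 9)
vei  = ta.ema((high - low) / math.max(volume, 1), 14)
// all three must align, and the regime filters must pass
[diPlus, diMinus, adxVal] = ta.dmi(14, 14)
atrExpanding = ta.atr(21) > ta.atr(21)[5] * 1.12
longVote  = spr > 0.7  and vwmo > 10  and vei > 4.5
shortVote = spr < -0.7 and vwmo < -10 and vei < -4.5
canTrade  = adxVal > 27 and atrExpanding
```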
Visuals
EXODUS prioritizes performance over visuals, as it was built for competitive and competition purposes. entry/exit signals are marked with simple labels and shapes, and a basic heatmap highlights market regimes. a more visually stunning update may be released later, with enhanced graphics and overlays.
Usage
EXODUS is designed for stocks and ETFs but can be adapted for futures with adjustments. it performs best in trending markets with sufficient volatility, as confirmed by its generic tuning across symbols like TSLA, AMD, NVDA, and NQ1. adjust inputs like SPR threshold, VWMO smoothing, or VEI momentum length to suit specific assets or timeframes.
Setting I used: (Again, these are a generic setting, each security needs to be fine tuned)
SPR LB = 19 SPR TH = 0.5 SPR ATR L= 21 SPR RTH Sess: 9:30 – 16:00
VWMO L = 21 VWMO LB = 18 VWMO S = 6 VWMO T = 8
VEI ES = 14 VEI ML = 21 VEI T = 4
R % = 0.4
ATR L = 21 ATR M (S) =1.1 TP Multi = 2.1 ATR min mult = 0.8 ATR Expansion = 1.02
ADX L = 21 Min ADX = 25
Choppiness Index = 14 Chop. Max T = 55.5
Backtesting: TSLA
Frame: Jan 02, 2018, 08:00 — May 01, 2025, 09:00
Slippage: 3
Commission: 0.01
Disclaimer
this strategy is for educational purposes. past performance is not indicative of future results. trading involves significant risk, and you should only trade with capital you can afford to lose. always backtest and validate any strategy before using it in live markets.
(This publishing will most likely be taken down do to some miscellaneous rule about properly displaying charting symbols, or whatever. Once I've identified what part of the publishing they want to pick on, I'll adjust and repost.)
About the Author
Dskyz (DAFE) Trading Systems is dedicated to building high-performance trading algorithms. EXODUS is a product of rigorous research and development, aimed at delivering consistent, and data-driven trading solutions.
Use it with discipline. Use it with clarity. Trade smarter.
**I will continue to release incredible strategies and indicators until I turn this into a brand or until someone offers me a contract.
2025 Created by Dskyz, powered by DAFE Trading Systems. Trade smart, trade bold.
Volume Predictor [PhenLabs]
📊 Volume Predictor
Version: PineScript™ v6
📌 Description
The Volume Predictor is an advanced technical indicator that leverages machine learning and statistical modeling techniques to forecast future trading volume. This innovative tool analyzes historical volume patterns to predict volume levels for upcoming bars, providing traders with valuable insights into potential market activity. By combining multiple prediction algorithms with pattern recognition techniques, the indicator delivers forward-looking volume projections that can enhance trading strategies and market analysis.
🚀 Points of Innovation:
Machine learning pattern recognition using Lorentzian distance metrics
Multi-algorithm prediction framework with algorithm selection
Ensemble learning approach combining multiple prediction methods
Real-time accuracy metrics with visual performance dashboard
Dynamic volume normalization for consistent scale representation
Forward-looking visualization with configurable prediction horizon
🔧 Core Components
Pattern Recognition Engine : Identifies similar historical volume patterns using Lorentzian distance metrics
Multi-Algorithm Framework : Offers five distinct prediction methods with configurable parameters
Volume Normalization : Converts raw volume to percentage scale for consistent analysis
Accuracy Tracking : Continuously evaluates prediction performance against actual outcomes
Advanced Visualization : Displays actual vs. predicted volume with configurable future bar projections
Interactive Dashboard : Shows real-time performance metrics and prediction accuracy
🔥 Key Features
The indicator provides comprehensive volume analysis through:
Multiple Prediction Methods : Choose from Lorentzian, KNN Pattern, Ensemble, EMA, or Linear Regression algorithms
Pattern Matching : Identifies similar historical volume patterns to project future volume
Adaptive Predictions : Generates volume forecasts for multiple bars into the future
Performance Tracking : Calculates and displays real-time prediction accuracy metrics
Normalized Scale : Presents volume as a percentage of historical maximums for consistent analysis
Customizable Visualization : Configure how predictions and actual volumes are displayed
Interactive Dashboard : View algorithm performance metrics in a customizable information panel
🎨 Visualization
Actual Volume Columns : Color-coded green/red bars showing current normalized volume
Prediction Columns : Semi-transparent blue columns representing predicted volume levels
Future Bar Projections : Forward-looking volume predictions with configurable transparency
Prediction Dots : Optional white dots highlighting future prediction points
Reference Lines : Visual guides showing the normalized volume scale
Performance Dashboard : Customizable panel displaying prediction method and accuracy metrics
📖 Usage Guidelines
History Lookback Period
Default: 20
Range: 5-100
This setting determines how many historical bars are analyzed for pattern matching. A longer period provides more historical data for pattern recognition but may reduce responsiveness to recent changes. A shorter period emphasizes recent market behavior but might miss longer-term patterns.
🧠 Prediction Method
Algorithm
Default: Lorentzian
Options: Lorentzian, KNN Pattern, Ensemble, EMA, Linear Regression
Selects the algorithm used for volume prediction:
Lorentzian: Uses Lorentzian distance metrics for pattern recognition, offering excellent noise resistance
KNN Pattern: Traditional K-Nearest Neighbors approach for historical pattern matching
Ensemble: Combines multiple methods with weighted averaging for robust predictions
EMA: Simple exponential moving average projection for trend-following predictions
Linear Regression: Projects future values based on linear trend analysis
Pattern Length
Default: 5
Range: 3-10
Defines the number of bars in each pattern for machine learning methods. Shorter patterns increase sensitivity to recent changes, while longer patterns may identify more complex structures but require more historical data.
Neighbors Count
Default: 3
Range: 1-5
Sets the K value (number of nearest neighbors) used in KNN and Lorentzian methods. Higher values produce smoother predictions by averaging more historical patterns, while lower values may capture more specific patterns but could be more susceptible to noise.
Prediction Horizon
Default: 5
Range: 1-10
Determines how many future bars to predict. Longer horizons provide more forward-looking information but typically decrease accuracy as the prediction window extends.
📊 Display Settings
Display Mode
Default: Overlay
Options: Overlay, Prediction Only
Controls how volume information is displayed:
Overlay: Shows both actual volume and predictions on the same chart
Prediction Only: Displays only the predictions without actual volume
Show Prediction Dots
Default: false
When enabled, adds white dots to future predictions for improved visibility and clarity.
Future Bar Transparency (%)
Default: 70
Range: 0-90
Controls the transparency of future prediction bars. Higher values make future bars more transparent, while lower values make them more visible.
📱 Dashboard Settings
Show Dashboard
Default: true
Toggles display of the prediction accuracy dashboard. When enabled, shows real-time accuracy metrics.
Dashboard Location
Default: Bottom Right
Options: Top Left, Top Right, Bottom Left, Bottom Right
Determines where the dashboard appears on the chart.
Dashboard Text Size
Default: Normal
Options: Small, Normal, Large
Controls the size of text in the dashboard for various display sizes.
Dashboard Style
Default: Solid
Options: Solid, Transparent
Sets the visual style of the dashboard background.
Understanding Accuracy Metrics
The dashboard provides key performance metrics to evaluate prediction quality:
Average Error
Shows the average difference between predicted and actual values
Positive values indicate the prediction tends to be higher than actual volume
Negative values indicate the prediction tends to be lower than actual volume
Values closer to zero indicate better prediction accuracy
Accuracy Percentage
A measure of how close predictions are to actual outcomes
Higher percentages (>70%) indicate excellent prediction quality
Moderate percentages (50-70%) indicate acceptable predictions
Lower percentages (<50%) suggest weaker prediction reliability
The accuracy metrics are color-coded for quick assessment:
Green: Strong prediction performance
Orange: Moderate prediction performance
Red: Weaker prediction performance
✅ Best Use Cases
Anticipate upcoming volume spikes or drops
Identify potential volume divergences from price action
Plan entries and exits around expected volume changes
Filter trading signals based on predicted volume support
Optimize position sizing by forecasting market participation
Prepare for potential volatility changes signaled by volume predictions
Enhance technical pattern analysis with volume projection context
⚠️ Limitations
Volume predictions become less accurate over longer time horizons
Performance varies based on market conditions and asset characteristics
Works best on liquid assets with consistent volume patterns
Requires sufficient historical data for pattern recognition
Sudden market events can disrupt prediction accuracy
Volume spikes may be muted in predictions due to normalization
💡 What Makes This Unique
Machine Learning Approach : Applies Lorentzian distance metrics for robust pattern matching
Algorithm Selection : Offers multiple prediction methods to suit different market conditions
Real-time Accuracy Tracking : Provides continuous feedback on prediction performance
Forward Projection : Visualizes multiple future bars with configurable display options
Normalized Scale : Presents volume as a percentage of maximum volume for consistent analysis
Interactive Dashboard : Displays key metrics with customizable appearance and placement
🔬 How It Works
The Volume Predictor processes market data through five main steps:
1. Volume Normalization:
Converts raw volume to percentage of maximum volume in lookback period
Creates consistent scale representation across different timeframes and assets
Stores historical normalized volumes for pattern analysis
2. Pattern Detection:
Identifies similar volume patterns in historical data
- Uses Lorentzian distance metrics for robust similarity measurement (see the sketch after this list)
Determines strength of pattern match for prediction weighting
3. Algorithm Processing:
Applies selected prediction algorithm to historical patterns
For KNN/Lorentzian: Finds K nearest neighbors and calculates weighted prediction
For Ensemble: Combines multiple methods with optimized weighting
For EMA/Linear Regression: Projects trends based on statistical models
4. Accuracy Calculation:
Compares previous predictions to actual outcomes
Calculates average error and prediction accuracy
Updates performance metrics in real-time
5. Visualization:
Displays normalized actual volume with color-coding
Shows current and future volume predictions
Presents performance metrics through interactive dashboard
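A sketch of the Lorentzian distance used in step 2 (illustrative, not the indicator's exact code): summing log(1 + |aᵢ − bᵢ|) over the pattern compresses large outliers, which is what gives the metric its noise resistance.
```
lorentzianDist(float[] a, float[] b) =>
    d = 0.0
    for i = 0 to array.size(a) - 1
        d += math.log(1 + math.abs(array.get(a, i) - array.get(b, i)))
    d
```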
💡 Note:
The Volume Predictor performs optimally on liquid assets with established volume patterns. It’s most effective when used in conjunction with price action analysis and other technical indicators. The multi-algorithm approach allows adaptation to different market conditions by switching prediction methods. Pay special attention to the accuracy metrics when evaluating prediction reliability, as sudden market changes can temporarily reduce prediction quality. The normalized percentage scale makes the indicator consistent across different assets and timeframes, providing a standardized approach to volume analysis.
Accurate Bollinger Bands mcbw_ [True Volatility Distribution]
The Bollinger Bands have become a very important technical tool for discretionary and algorithmic traders alike over the last decades. They were designed to give traders an edge on the markets by setting probabilistic values to different levels of volatility. However, some of the assumptions that go into their calculations make them unusable for traders who want a correct understanding of the volatility that the bands are meant to capture. Let's go through what the Bollinger Bands are said to show, how their calculations work, the problems in the calculations, and how the current indicator I am presenting today fixes these.
--> If you just want to know how the settings work, skip straight to the end or click on the little (i) symbol next to the values in the indicator settings window when it's on your chart <--
--------------------------- What Are Bollinger Bands ---------------------------
The Bollinger Bands were formed in the 1980's, a time when many retail traders interacted with their symbols via physically printed charts and personal computer memory was measured in KB (about a factor of a million smaller than today). Bollinger Bands are designed to help a trader or algorithm see the likelihood of price expanding outside of its typical range: the further the lines are from the current price, the less often they should get hit. In practice, many strategies use these levels to designate breakout trades or to assist in defining price ranges.
--------------------------- How Bollinger Bands Work ---------------------------
The calculations that go into Bollinger Bands are rather simple. There is a moving average that centers the indicator, and an equidistant top band and bottom band are drawn at a fixed width away. The moving average is just a typical moving average (or common variant) that tracks the price action, while the distance to the top and bottom bands is a direct function of recent price volatility. The way that the distance to the bands is calculated is inspired by formulas from statistics. The standard deviation is taken from the candles that go into the moving average, and then this is multiplied by a user-defined value to set the bands' position; I will call this value 'the multiple'. When discussing Bollinger Bands, the trading community at large normally treats 'the multiple' as a multiplier of the standard deviation as it applies to a normal distribution (Gaussian probability). On a normal distribution, the number of standard deviations away (which traders use directly as 'the multiple') corresponds directly to how likely/unlikely something is to happen:
1 standard deviation equals 68.3%, meaning that the price should stay inside the 1 standard deviation band 68.3% of the time and be outside of it 31.7% of the time;
2 standard deviations equal 95.5%, meaning that the price should stay inside the 2 standard deviation band 95.5% of the time and be outside of it 4.5% of the time;
3 standard deviations equal 99.7%, meaning that the price should stay inside the 3 standard deviation band 99.7% of the time and be outside of it 0.3% of the time.
Therefore, when traders set 'the multiple' to 2, they interpret this as meaning that price will stay inside the bands 95.5% of the time and only trade beyond them 4.5% of the time.
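For reference, the entire classic construction described above fits in a few lines of Pine v5. This is only the textbook formula, shown here to ground the discussion that follows:

//@version=5
indicator("Classic Bollinger Bands (reference)", overlay=true)
len  = input.int(20, "Length")
mult = input.float(2.0, "Multiple")

basis = ta.sma(close, len)           // the centering moving average
dev   = mult * ta.stdev(close, len)  // 'the multiple' times the standard deviation
plot(basis, "Basis", color=color.blue)
plot(basis + dev, "Upper band", color=color.gray)
plot(basis - dev, "Lower band", color=color.gray)

All of the criticism below is aimed at that ta.stdev(close, len) * mult step and the normal-distribution interpretation that gets attached to it.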
---------------- The Problem With The Math of Bollinger Bands ----------------
In and of themselves the Bollinger Bands are a great tool, but they have become loaded with an incorrect sense of statistical meaning, when they should really just be taken at face value without any further interpretation or implication.
In order to explain this it is going to get a bit technical so I will give a little math background and try to simplify things. First let's review some statistics topics (distributions, percentiles, standard deviations) and then with that understanding explore the incorrect logic of how Bollinger Bands have been interpreted/employed.
---------------- Quick Stats Review ----------------
.
(If you are comfortable with statistics feel free to skip ahead to the next section)
.
-------- I: Probability distributions --------
When you have a lot of data it is helpful to see how many times different results appear in your dataset. To visualize this, people use "histograms", which just show how many times each element appears in the dataset by stacking each of the same elements on top of each other to form a graph. You may be familiar with the bell curve (also called the "normal distribution", the name we will use from here on). The normal distribution histogram looks like a big hump around zero and then drops off super quickly the further you get from it. This shape (the bell curve) is very nice because it has a lot of very nifty mathematical properties and seems to show up in nature all the time. Since it pops up in so many places, society has developed many different shortcuts related to it that speed up all kinds of calculations, including the shortcut that 1 standard deviation = 68.3%, 2 standard deviations = 95.5%, and 3 standard deviations = 99.7% (these only apply to the normal distribution). Despite how handy the normal distribution is, how many shortcuts we have for it, and how much it shows up in the natural world, there is nothing that forces your specific dataset to look like it. In fact, your data can actually have any possible shape. As we will explore later, economic and financial datasets *rarely* follow the normal distribution.
-------- II: Percentiles --------
After you have made the histogram of your dataset you have built the "probability distribution" of your own dataset, specific to all the data you have collected. There is a whole complicated framework for how to accurately calculate percentiles, but we will dramatically simplify it for our use. The 'percentile' in our case is just how many data points we are away from the "middle" of the data set (normally just 0). Let's say I took the difference of the daily close of a symbol for the last two weeks; green candles would be positive and red would be negative. In this example my dataset of day-by-day closing price differences is:
week 1:
week 2:
Sorting all of these values into a single dataset, I have:
I can separate the positive and negative returns and explore their distributions separately:
negative return distribution =
positive return distribution =
Taking the 25th percentile of these would just be taking the value that sits 25% of the way toward the end of these sorted returns. Likewise, the 100th percentile would just be the value at the very end of them:
negative return distribution (50%) = -5
positive return distribution (50%) = +4
negative return distribution (100%) = -10
positive return distribution (100%) = +20
Or, instead of separating the positive and negative returns, we can also look at all of the differences in the daily close as just pure price movement and not account for the direction. In this case, we would pool all of the data together by ignoring the negative signs of the negative returns:
combined return distribution =
In this case the 50th and 100th percentiles of the combined return distribution would be:
combined return distribution (50%) = 4
combined return distribution (100%) = 10
Sometimes taking the positive and negative distributions separately is better than pooling them into a combined distribution for some purposes. Other times the combined distribution is better.
Most financial data has very different distributions for negative returns and positive returns. This is encapsulated in sayings like "Price takes the stairs up and the elevator down".
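A minimal Pine v5 sketch of this idea, assuming illustrative names (pos, neg): it keeps a rolling sample of one-candle returns, splits them by sign (storing magnitudes for the negative side), and reads off whichever percentile you ask for using the built-in nearest-rank percentile:

//@version=5
indicator("Return percentile sketch")
n   = input.int(200, "Sample size")
pct = input.float(95.5, "Percentile")

ret = close - close[1]
var pos = array.new_float()  // positive returns
var neg = array.new_float()  // magnitudes of negative returns
if not na(ret)
    if ret >= 0
        array.push(pos, ret)
    else
        array.push(neg, -ret)  // store the magnitude
    if array.size(pos) > n
        array.shift(pos)
    if array.size(neg) > n
        array.shift(neg)

posPct = array.size(pos) > 0 ? array.percentile_nearest_rank(pos, pct) : na
negPct = array.size(neg) > 0 ? array.percentile_nearest_rank(neg, pct) : na
plot(posPct, "Positive-side percentile", color=color.green)
plot(negPct, "Negative-side percentile (magnitude)", color=color.red)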
-------- III: Standard Deviation --------
The formula for the standard deviation (referred to here by its shorthand 'STDEV') can be intimidating, but going through each of its elements will illuminate what it does. The formula for STDEV is equal to:
STDEV = square root( sum( (x - mean)^2 ) / N )
Going back to the dataset that you might have, the variables in the formula above are:
'mean' is the average of your entire dataset
'x' is just representative of a single point in your dataset (one point at a time)
'N' is the total number of things in your dataset.
Going back to the STDEV formula above, we can see how each part of it works. Starting with the '(x - mean)' part: this takes every single point of the dataset and measures how far away it is from the mean of the entire dataset. Taking this value to the power of two, '(x - mean) ^ 2', means that points that are very far away from the dataset mean get 'penalized' much more heavily, while points that are very close to the dataset mean are not impacted as much. In practice, this means that if your dataset had a bunch of values that were spread over a wide range but always stayed in that range, this value ('(x - mean) ^ 2') would end up being small. On the other hand, if your dataset was full of the exact same number but had a couple of outliers very far away, this would have a much larger value, since the squaring in '(x - mean) ^ 2' makes them grow massive. The 'sum( (x - mean)^2 )' part then just adds up all of the squared distances from the dataset mean. This sum is divided by the number of values in the dataset ('N'), and then the square root of that value is taken.
There is nothing inherently special or definitive about the STDEV formula, it is just a tool with extremely widespread use and adoption. As we saw here, all the STDEV formula is really doing is measuring the intensity of the outliers.
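The whole formula is only a few lines of code. Here is a minimal Pine v5 sketch that computes STDEV by hand next to the built-in for comparison (ta.stdev defaults to the same divide-by-N population form used here):

//@version=5
indicator("STDEV by hand")
len  = input.int(20, "Length")
mean = ta.sma(close, len)  // 'mean' over the window

sumSq = 0.0
for i = 0 to len - 1
    sumSq += math.pow(close[i] - mean, 2)  // (x - mean)^2: outliers get penalized heavily

stdevManual = math.sqrt(sumSq / len)  // divide by N, then take the square root
plot(stdevManual, "Manual STDEV", color=color.orange)
plot(ta.stdev(close, len), "Built-in ta.stdev", color=color.blue)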
--------------------------- Flaws of Bollinger Bands ---------------------------
The largest problem with Bollinger Bands is the assumption that price has a normal distribution. This assumption is massively incorrect for many reasons, which I will try to encapsulate in two points:
Price returns do not follow a normal distribution; every single symbol on every single timeframe has its own unique distribution that is specific to only itself. Therefore all the tools, shortcuts, and ideas that we use for normal distributions do not apply to price returns, and since they do not apply here they should not be used. A more general approach is needed that allows each specific symbol on each specific timeframe to be treated uniquely.
The distributions of price returns on the positive and negative side are almost never the same. A more general approach is needed that allows positive and negative returns to be calculated separately.
In addition to the issues with the normal distribution assumption, the standard deviation formula (as shown above in the quick stats review) is essentially just a tame measurement of outliers (a more aggressive form of outlier measurement might take the differences to the power of 3 rather than 2). Despite this being a bit of a philosophical question, does the measurement of outlier intensity as defined by the STDEV formula really measure what we want to know as traders when we're experiencing volatility? Or would adjustments to that formula better reflect what we *experience* as volatility when we are actively trading? This is an open-ended question that I will leave here, but I wanted to pose it because it is a key part of how the Bollinger Bands work that we all take as a given.
Circling back to the normal distribution assumption: the standard deviation used in the calculation of the bands only encompasses the deviation of the candles that go into the moving average and has no knowledge of the historical price action. Therefore the level of the bands may not really reflect how the price action behaves over a longer period of time.
------------ Delivering Factually Accurate Data That Traders Need ------------
In light of the problems identified above, this indicator fixes all of these issues and delivers statistically correct information that discretionary and algorithmic traders can use, with truly accurate probabilities. It takes the price action of the last 2,000 candles and builds a huge dataset of distributions from which you can directly select your percentiles. It also allows you to have the positive and negative distributions calculated separately, or, if you would like, you can pool all of them together into a combined distribution. In addition to this, there is a wide selection of moving averages directly available in the indicator to choose from.
Hedge funds, quant shops, algo prop firms, and advanced mechanical groups all employ true return distributions in their work. Now you have access to the same type of data with this indicator, which does all the heavy lifting for you.
------------------------------ Indicator Settings ------------------------------
.
---- Moving average ----
Select the type of moving average you would like and its length
---- Bands ----
The percentiles that you enter here will be pulled directly from the return distribution of the last 2,000 candles. With the typical Bollinger Bands, traders would select 2 standard deviations and incorrectly think that the levels it highlights are the 95.5% levels. Now, if you want the true 95.5% level, you can just enter 95.5 into the percentile value here. Each of the three available bands takes the true percentile you enter here.
---- Separate Positive & Negative Distributions ----
If this box is checked, the positive and negative distributions are treated independently, completely separate from each other. You will see that the width of the top and bottom bands will be different for each of the percentiles you enter.
If this box is unchecked then all the negative and positive distributions are pooled together. You will notice that the width of the top and bottom bands will be the exact same.
---- Distribution Size ----
This is the number of candles that the price return is calculated over. EG: to collect the price return over the last 33 candles, the difference in price from now to 33 candles ago is calculated for each of the last 2,000 candles, building a return distribution of 2,000 price differences measured over 33 candles.
NEGATIVE NUMBERS(<0) == exact number of candles to include;
EG: setting this value to -20 will always collect volatility distributions of 20 candles
POSITIVE NUMBERS(>0) == number of candles to include as a multiple of the Moving Average Length value set above;
EG: if the Moving Average Length value is set to 22, setting this value to 2 will use the last 22*2 = 44 candles for the collection of volatility distributions
MORE candles being included will generally make the bands WIDER and their size will change SLOWER over time.
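To tie the settings together, here is a heavily simplified Pine v5 sketch of the general approach: illustrative names only, the pooled (combined) distribution only, and a single band, so it is a fraction of what the published indicator does:

//@version=5
indicator("Percentile bands sketch", overlay=true, max_bars_back=2500)
maLen   = input.int(20, "Moving Average Length")
distLen = input.int(-20, "Distribution Size")
pctl    = input.float(95.5, "Band percentile")
samples = 2000

// Negative = exact candle count, positive = multiple of the MA length
span  = distLen < 0 ? -distLen : distLen * maLen
basis = ta.sma(close, maLen)

// Collect up to 2,000 price moves measured over `span` candles
var dist = array.new_float()
move = close - close[span]
if not na(move)
    array.push(dist, math.abs(move))  // pooled (combined) distribution
    if array.size(dist) > samples
        array.shift(dist)

band = array.size(dist) > 0 ? array.percentile_nearest_rank(dist, pctl) : na
plot(basis, "Basis", color=color.blue)
plot(basis + band, "Upper percentile band", color=color.gray)
plot(basis - band, "Lower percentile band", color=color.gray)

With the separate-distributions option described above, the upper band would instead be fed from the positive-return sample and the lower band from the negative-return sample, giving each side its own width.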
I wish you focus, dedication, and earnest success on your journey.
Happy trading :)
[Pandora] Error Function Treasure Trove - ERF/ERFI/Sigmoids+
PRAISE:
At this time, I have to graciously thank the wonderful minds behind the new "Pine Profiler Mode" (PPM). Directly prior to this release, it allowed me to ascertain script performance even more. While I usually write mostly in highly optimized Pine code, PPM visually identified a few bottlenecks that would otherwise be hard to find. Anyone who contributed to PPM's creation and testing before release... BRAVO!!! I commend all of those who assisted in its state-of-the-art engineering and inception. Well done!
BACKSTORY:
This script is specifically being released in defense of another member, an exceptionally unique PhD. It was brought to my attention that a script-mod-event occurred regarding the publishing of a measly antiquated error function (ERF) calculation within his script. This sadly resulted in the now former member jumping ship after receiving unmannerly responses amidst his curious inquiries as to why his erf() was modded. To forbid rusty and rudimentary formulations because a mod-on-duty is temporarily offended by a non-nefarious release of code is, in MY opinion, an injustice to the principles of perpetuating open-source code intended to benefit thousands to millions of community members. While Pine is the heart and soul of TV, the mathematical concepts contributed from the minds of members are the inspirational fuel of curiosity that powers its pertinent reason to exist and evolve.
It is an indisputable fact that most members are not greatly skilled Pine Poets. Many members may be incapable of innovating robust function code in Pine, even if they have one or more PhDs. We ALL come from various disciplines of mathematical comprehension and education. Some mathematicians are not greatly skilled at coding, while some coders are not exceptional at math. So... what am I to do to attempt to resolve this circumstantial challenge??? Those who know me best are aware that I will always side with "the right side of history" in order to accomplish my primary self-defined missions I choose to accept. Serving as an algorithmic advocate, I felt compelled to intercede by compiling numerous error functions into elegant code of very high caliber that any and every TV member may choose to employ, so this ERROR never happens again.
After weeks of contemplation into algorithms I knew little about, I prioritized myself to resolve an unanticipated matter by creating advanced formulas of exquisitely crafted error functions refined to the best of my current abilities. My aversion for unresolved problems motivated me to eviscerate error function insufficiencies with many more rigid formulations beyond what is thought to exist. ERF needed a proper algorithmic exorcism anyways. In my furiosity, I contemplated an array of madMAXimum diplomatic demolition methods, choosing the chain saw massacre technique to slaughter dysfunctionalities I encountered on a battered ERF roadway. This resulted in prolific solutions that should assuredly endure the test of time. Poetically, as you will come to see, I am ripping the lid off of Pandora's box of error functions in this case to correct wrongs into a splendid bundle of rights for members.
INTENTION:
Error function (ERF) enthusiasts... PREPARE FOR GLORY!! The specific purpose of this script is to deprecate classic error functions with the creation of a fierce and formidable army of superior formulations, each having varying attributes of computational complexity with differing absolute error ranges in their results for multiple compute scenarios. This is NOT an indicator... It is intended to allow members to embark on endeavors to advance the profound knowledge base of this growing worldwide community of 60+ million inquisitive minds. For those of you who believe computational mathematics and statistics are near completion at their finest, I am here to inform you that this is ridiculous to ponder. We are nowhere near the statistical excellence that can and will eventually exist. At this time, metaphorically speaking, we are merely scratching microns off of the surface of the skin of a statistical apple Isaac Newton once pondered.
THIS RELEASE:
Following weeks of pondering methodical experiments beyond the ordinary, I am liberating these wild notions of my error function explorations to the entire globe as copyleft code, not just Pine. This Pandora's basket of ERFs is being openly disclosed for the sake of the sanctity of mathematics, empirical science (not the garbage we are told by CONTROLocrats to blindly trust), revolutionary cutting edge engineering, cosmology, physics, information technology, artificial intelligence, and EVERY other mathematical branch of human knowledge being discovered over centuries. I do believe James Glaisher would favor my aims concerning ERF aspirations embracing the "Power of Pine".
The included functions are intended for TV members to use in any way they see fit. This is a gift to ALL members to foster future innovative excellence on this platform. Any attempt to moderate this code without notification of "self-evident clear and just cause" will be considered an irrevocable egregious action. The original foundational PURPOSE of establishing script moderation (I clearly remember) was primarily to maintain active vigilance over a growing community against intentional nefarious actions and/or behaviors in blatant disrespect to other author's works AND also thwart rampant copypasting bandit operations, all while accommodating balanced principles of fairness for an educational community cause via open source publishing that should support future algorithmic inventions well beyond my lifespan.
APPLICATIONS:
The related error functions are used in probability theory, statistics, and numerous scientific and engineering disciplines. Their key characteristics and applications are innumerable in computational realms, and their versatility and significance make them a fundamental tool in arenas of quantitative analysis and scientific research...
Probability Theory - It's widely used in probability theory to calculate probabilities and quantiles of the normal distribution.
Statistics - It's related to the Gaussian integral and plays a crucial role in statistics, especially in hypothesis testing and confidence interval calculations.
Physics - In physics, it arises in the study of diffusion equations, quantum mechanics, and heat conduction problems.
Engineering - Applications exist in engineering disciplines such as signal processing, control theory, and telecommunications.
Error Analysis - It's employed in error analysis and uncertainty quantification.
Numeric Approximations - Due to its lack of a closed-form expression, numerical methods are often employed to approximate erf/erfi().
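As one concrete example of such a numerical method (and explicitly not one of the tuned formulations in this script), the classic Abramowitz & Stegun approximation 7.1.26 achieves absolute error below roughly 1.5e-7 in a handful of Pine v5 lines:

//@version=5
indicator("erf() approximation sketch")

// Abramowitz & Stegun 7.1.26: |error| < 1.5e-7 over all real x
erfAS(float x) =>
    s  = x < 0 ? -1.0 : 1.0      // erf is odd, so compute on |x| and restore the sign
    ax = math.abs(x)
    t  = 1.0 / (1.0 + 0.3275911 * ax)
    p  = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t
    s * (1.0 - p * math.exp(-ax * ax))

// Sweep x across the chart to trace the sigmoid shape of erf
plot(erfAS(bar_index / 100.0 - 3.0), "erf(x)", color=color.purple)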
AI, LLMs, & MACHINE LEARNING:
The error function (ERF) is indispensable to various AI applications, particularly due to its relation to Gaussian distributions and error analysis. It is used in Gaussian processes for regression and classification, probabilistic inference for Bayesian networks, soft margin computation in SVMs, neural networks involving Gaussian activation functions or noise, and clustering algorithms like Gaussian Mixture Models. Improved ERF approximations can enhance precision in these applications, reduce computational complexity, handle outliers and noise better, and improve optimization and convergence, possibly leading to more accurate, efficient, and robust AI systems.
BONUS ALGORITHMS:
While ERFs are versatile, their opposite also exists in the form of inverse error functions (ERFIs). I have also included a modified form of the inverse Fisher transform alongside MY sigmoid (sigmyod). I am uncertain what sigmyod() may be used for, but it's a culmination of my examinations deep into "sigmoid domains", something I am fascinated by. Whatever implications it may possess, I am unveiling it along with its cousin functions. For curious minds, the quality of composition seen here is ideally what underlies what I would term "Pandora functionality", which empowers my Pandora indication. I go through hordes of formulations, testing, and inspection to find what appears to be the most beneficial logical/mathematical equation to apply...
SCRIPT OPERATION:
To showcase the characteristics and performance of my ERF/ERFI formulations, I devised a multi-modal script. By using bar_index, I generated a broad sequence of numeric values to input into the first ERF/ERFI parameter. These sequences allow you to inspect the contours of the error function's outputs for both ERF and ERFI. When combined with compute-intensive precision functions (CIPFs), the polynomial function output values can be subtracted from my CIPFs to obtain results of absolute error, displaying the accuracy of the many polynomial estimation functions I tuned in testing for Pine's float environment.
A host of numeric input settings are wildly adjustable to inspect values/curvatures across the range of numeric input sequences. Very large numbers, such as Divisor:100,000,100/Offset:200,000,000 for ERF modes or Divisor:100,000,100/Offset:100,000,000 for ERFI modes, will display minuscule output values calculated from input values in close proximity to 0.0 for the various estimates, similar to a microscope. ERFI approximations very near in proximity to +/-1.0 will always yield large deviations of absolute error. Dragging/zooming your chart or using the Offset input will aid with visually clipping off those ERFI extremes where float precision functions cannot suffice.
NOTICE:
perf() and perfi() are intended for precision computation (as good as it basically gets) in a float environment. However, they are CPU intensive (especially perfi). I wouldn't recommend these being used in ANY Pine script unless it's an "absolute necessity" to do so to accomplish your goal. I only built them to obtain "absolute error curvatures" of the error functions for the polynomial approximations. These are visible in the accuracy modes in the indicator Settings.