Machine Learning: LVQ-based Strategy (FX and Crypto)
Description:
Learning Vector Quantization (LVQ) can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all learning approach. It is a prototype-based supervised classification algorithm that trains its weights through competitive learning.
Algorithm (a minimal sketch follows the steps):
Initialize weights
Train for epochs 1 to N:
- Select a training example
- Compute the winning (nearest) vector
- Update the winning vector
Classify test sample
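As a rough illustration of these steps, here is a minimal LVQ1 sketch in Pine. It is not the published script: the two-feature setup (RSI and momentum), the up/down-bar labels, and the learning-rate default are illustrative assumptions.

```pine
//@version=5
indicator("LVQ1 sketch")

lrate = input.float(0.05, "Learning rate")

// One prototype per class (class 1 = up bar, class 0 = down bar),
// each holding two feature weights.
var float[] w0 = array.from(50.0, 0.0)
var float[] w1 = array.from(50.0, 0.0)

// Feature vector of the current training example.
f1 = ta.rsi(close, 14)
f2 = ta.mom(close, 10)
target = close > open ? 1 : 0 // supervised class label

// Euclidean distance from the current sample to a prototype.
dist(w) =>
    math.sqrt(math.pow(f1 - array.get(w, 0), 2) + math.pow(f2 - array.get(w, 1), 2))

winClass = dist(w0) < dist(w1) ? 0 : 1 // winning (nearest) vector

if not na(f1) and not na(f2)
    w = winClass == 0 ? w0 : w1
    // LVQ1 rule: pull the winner toward the sample on a correct match,
    // push it away otherwise.
    sgn = winClass == target ? 1.0 : -1.0
    array.set(w, 0, array.get(w, 0) + sgn * lrate * (f1 - array.get(w, 0)))
    array.set(w, 1, array.get(w, 1) + sgn * lrate * (f2 - array.get(w, 1)))

plot(winClass, "Predicted class", style=plot.style_stepline)
```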
The LVQ algorithm offers a framework for easily testing various indicators to see whether they have any *predictive value*. One can easily add CoG, WPR, and others.
Note: TradingView's playback feature helps to see this strategy in action. The algorithm was tested on BTCUSD at the 1-hour timeframe.
Warning: This is a preliminary version! Signals ARE repainting.
***Warning***: Signals LARGELY depend on the hyperparameters (learning rate and number of epochs).
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
MIDAS VWAP Jayy
This is just a bash together of two MIDAS VWAP scripts, particularly those by AkifTokuz and drshoe.
I added the ability to show more MIDAS curves from the same script.
The algorithm primarily uses the "n" bar number, but the date can be used for the 8th VWAP.
I have not converted the script to version 3.
To find the bar number, go into "Chart Properties", select "Background", then select "Indicator Titles" and "Indicator values". When you place your cursor over a bar, the first number you see adjacent to the script title is the bar number. Put that in the dialogue box. The midline is the MIDAS VWAP. The resistance is a MIDAS VWAP using bar highs. The support is a MIDAS VWAP using bar lows.
In most cases using N will suffice. However, if you are flipping around charts, inputting a specific date can be handy. In this way, you can compare the same point in time across multiple instruments, e.g. the first trading day of the year or an election date.
Adding dates into the dialogue box is a bit cumbersome so in this version, it is enabled for only one curve. I have called it VWAP and it follows the typical VWAP algorithm. (Does that make a difference? Read below re my opinion on the Difference between MIDAS VWAP and VWAP ).
I have added the ability to start from the bottom or top of the initiating bar.
In theory, in a probable uptrend, pick the low of a bar as a low pivot and start the MIDAS VWAP there using the support option.
For a downtrend, use the high pivot bar and select resistance. The best way to learn is to play with these values.
Difference between MIDAS VWAP and the regular VWAP
MIDAS itself as described by Levine uses a time anchored On-Balance Volume (OBV) plotted on a graph where the horizontal (abscissa) arm of the graph is cumulative volume not time. He called his VWAP curves Support/Resistance VWAP or S/R curves. These S/R curves are often referred to as "MIDAS curves".
These are the main components of the MIDAS chart. A third algorithm called the Top-Bottom Finder was also described. (Separate script).
Additional tools have been described in "MIDAS_Technical_Analysis"
Midas Technical Analysis: A VWAP Approach to Trading and Investing in Today’s Markets by Andrew Coles, David G. Hawkins
Copyright © 2011 by Andrew Coles and David G. Hawkins.
The name denotes the different way in which Levine approached the calculation.
The difference between "MIDAS" VWAP and VWAP is, in my opinion, much ado about nothing. The algorithms generate identical curves albeit the MIDAS algorithm launches the curve one bar later than the VWAP algorithm which can be a pain in the neck. All of the algorithms that I looked at on Tradingview step back one bar in time to initiate the MIDAS curve. As such the plotted curves are identical to traditional VWAP assuming the initiation is from the candle/bar midpoint.
How did Levine intend the curves to be drawn?
On a reversal, he suggested that the Support and Resistance VWAP (S/R curve) be initiated after the reversal.
It is clear in his examples this happens occasionally but in many cases he initiates the so-called MIDAS S/R VWAP right at the reversal point. In any case, the algorithm is problematic if you wish to start a curve on the first bar of an IPO .
You will get nothing. That is a pain. Also in Levine's writings, he describes simply clicking on the point where a
S/R VWAP is to be drawn from. As such, the generally accepted method of initiating the curve at N-1 is a practical and sensible method. The only issue is that you cannot draw the curve from the first bar on any security, as mentioned, without resorting to the typical VWAP algorithm. There is another difference: VWAP is launched from the middle of the bar (as per AlphaTrends). You can also launch from the top of the bar or the bottom (or anywhere, for that matter). The calculation proceeds using the top or bottom for each new bar.
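As a worked example of these launch options, here is a minimal anchored-VWAP sketch started at a user-supplied bar number and launched from the bar's midpoint, top, or bottom. The input names are illustrative, not those of any script mentioned above.

```pine
//@version=5
indicator("Anchored VWAP sketch", overlay=true)

startBar = input.int(100, "Anchor bar number (n)")
anchor   = input.string("mid", "Launch from", options=["mid", "high", "low"])

// Price fed into the curve: bar midpoint, top, or bottom.
px = anchor == "high" ? high : anchor == "low" ? low : hl2

var float cumPV = 0.0
var float cumV  = 0.0
if bar_index >= startBar
    cumPV += px * volume
    cumV  += volume

plot(cumV > 0 ? cumPV / cumV : na, "Anchored VWAP", color=color.orange, linewidth=2)
```

Starting the accumulation at bar N itself gives the traditional VWAP launch; shifting the start back one bar reproduces the MIDAS-style N-1 initiation discussed above.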
The potential applications are discussed in the MIDAS Technical Analysis book.
Opening Range Gaps [TakingProphets]
What is an Opening Range Gap (ORG)?
In ICT, the Opening Range Gap is defined as the price difference between the previous session’s close (e.g., 4:00 PM EST in U.S. indices) and the current day’s open (9:30 AM EST).
That gap is a liquidity void—an area where no trading occurred during regular hours.
Why ICT Traders Care About ORG
Liquidity Void (Gap Fill Logic)
-Because the gap is an untraded area, it naturally acts as a draw on liquidity.
-Price often seeks to rebalance by retracing into or fully filling this void.
Premium/Discount Sensitivity
-Once the ORG is defined, ICT treats it as a mini dealing range.
-Above EQ (Consequent Encroachment) = algorithmic premium (sell-sensitive).
-Below EQ = algorithmic discount (buy-sensitive).
-Price reaction at these levels gives a precise read on institutional intent intraday.
Support/Resistance from ORG
-If the session opens above prior close, the gap often acts as support until violated.
-If the session opens below prior close, the gap often acts as resistance until reclaimed.
Key ICT Concepts Anchored to ORG
Consequent Encroachment (CE): The midpoint of the gap. The algo is highly sensitive to CE as a decision point: reject → continuation; reclaim → reversal.
Draw on Liquidity (DoL): Price is algorithmically “pulled” toward gap fills, CE, or the opposite side of the ORG.
Order Flow Confirmation: If price ignores the gap and runs away from it, this signals strong institutional order flow in that direction.
Confluence with Other Tools: FVGs, OBs, and HTF PD arrays often overlap with ORG levels, strengthening setups.
Practical Application for Traders
Bias Formation:
Use ORG EQ as a line in the sand for intraday bias.
If price trades below ORG EQ after the open → look for short setups into the prior day’s low or external liquidity.
If price trades above ORG EQ → favor longs into highs/liquidity pools.
Execution Framework:
Wait for liquidity raids or market structure shifts at ORG edges (.00, .25, .50, .75).
Target: EQ, opposite quarter, or full gap fill.
Precision Reads:
ORG lines let traders anticipate where algorithms are likely to respond, providing mechanical invalidation and clear targets without clutter.
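A minimal sketch of these ORG reference levels (gap, CE, and quarter levels) follows. Using the daily series as a proxy for the prior session close and current session open is a simplifying assumption; for U.S. indices you would anchor 16:00 and 09:30 EST explicitly.

```pine
//@version=5
indicator("ORG levels sketch", overlay=true)

// Prior session close and current session open from the daily series.
prevClose = request.security(syminfo.tickerid, "D", close[1], lookahead=barmerge.lookahead_on)
dayOpen   = request.security(syminfo.tickerid, "D", open, lookahead=barmerge.lookahead_on)

gap = dayOpen - prevClose
ce  = prevClose + gap * 0.50 // Consequent Encroachment (midpoint / EQ)
q25 = prevClose + gap * 0.25
q75 = prevClose + gap * 0.75

plot(prevClose, "ORG 0.00 (prior close)", color=color.gray)
plot(q25, "ORG 0.25", color=color.teal)
plot(ce, "ORG 0.50 (CE)", color=color.orange, linewidth=2)
plot(q75, "ORG 0.75", color=color.teal)
plot(dayOpen, "ORG 1.00 (open)", color=color.gray)
```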
Institutional Levels (CNN) - [PhenLabs]
📊 Institutional Levels (Convolutional Neural Network-inspired)
Version: PineScript™ v6
📌Description
The CNN-IL Institutional Levels indicator represents a breakthrough in automated zone detection technology, combining convolutional neural network principles with advanced statistical modeling. This sophisticated tool identifies high-probability institutional trading zones by analyzing pivot patterns, volume dynamics, and price behavior using machine learning algorithms.
The indicator employs a proprietary 9-factor logistic regression model that calculates real-time reaction probabilities for each detected zone. By incorporating CNN-inspired filtering techniques and dynamic zone management, it provides traders with unprecedented accuracy in identifying where institutional money is likely to react to price action.
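The script's nine factors and fitted weights are proprietary, but the logistic form described here can be sketched with stand-in features and made-up weights:

```pine
//@version=5
indicator("Logistic reaction-probability sketch")

// Hypothetical standardized features (stand-ins for the model's 9 factors).
x1 = (close - ta.sma(close, 50)) / ta.atr(14)             // distance proxy
x2 = ta.atr(14) / close                                    // zone-width proxy
x3 = (volume - ta.sma(volume, 20)) / ta.stdev(volume, 20)  // volume Z-score

// Hypothetical weights; w0 is the intercept.
w0 = -0.5
w1 = -0.8
w2 = 0.3
w3 = 0.6

z = w0 + w1 * x1 + w2 * x2 + w3 * x3
p = 1.0 / (1.0 + math.exp(-z)) // logistic link maps the score into [0, 1]
plot(p, "Reaction probability")
```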
🚀Points of Innovation
● CNN-Inspired Pivot Analysis - Advanced binning system using convolutional neural network principles for superior pattern recognition
● Real-Time Probability Engine - Live reaction probability calculations using 9-factor logistic regression model
● Dynamic Zone Intelligence - Automatic zone merging using Intersection over Union (IoU) algorithms (see the sketch after the core components)
● Volume-Weighted Scoring - Time-of-day volume Z-score analysis for enhanced zone strength assessment
● Adaptive Decay System - Intelligent zone lifecycle management based on touch frequency and recency
● Multi-Filter Architecture - Optional gradient, smoothing, and Difference of Gaussians (DoG) convolution filters
🔧Core Components
● Pivot Detection Engine - Advanced pivot identification with configurable left/right bars and ATR-normalized strength calculations
● Neural Network Binning - Price level clustering using CNN-inspired algorithms with ATR-based bin sizing
● Logistic Regression Model - 9-factor probability calculation including distance, width, volume, VWAP deviation, and trend analysis
● Zone Management System - Intelligent creation, merging, and decay algorithms for optimal zone lifecycle control
● Visualization Layer - Dynamic line drawing with opacity-based scoring and optional zone fills
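As context for the IoU-based merging referenced above: for one-dimensional price zones, Intersection over Union reduces to overlap divided by combined span. The zone bounds below are purely illustrative.

```pine
//@version=5
indicator("Zone IoU sketch", overlay=true)

// IoU for two price zones [lo, hi]; zones merge when IoU exceeds a threshold.
iou(loA, hiA, loB, hiB) =>
    inter = math.max(0.0, math.min(hiA, hiB) - math.max(loA, loB))
    union = math.max(hiA, hiB) - math.min(loA, loB)
    union > 0 ? inter / union : 0.0

zALo = ta.lowest(low, 20)
zAHi = zALo + ta.atr(14) * 0.5
zBLo = ta.lowest(low, 50)
zBHi = zBLo + ta.atr(14) * 0.5

mergeThreshold = input.float(0.5, "Merge IoU threshold")
shouldMerge = iou(zALo, zAHi, zBLo, zBHi) >= mergeThreshold
bgcolor(shouldMerge ? color.new(color.green, 85) : na)
```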
🔥Key Features
● High-Probability Zone Detection - Automatically identifies institutional levels with reaction probabilities above configurable thresholds
● Real-Time Probability Scoring - Live calculation of zone reaction likelihood using advanced statistical modeling
● Session-Aware Analysis - Optional filtering to specific trading sessions for enhanced accuracy during active market hours
● Customizable Parameters - Full control over lookback periods, zone sensitivity, merge thresholds, and probability models
● Performance Optimized - Efficient processing with controlled update frequencies and pivot processing limits
● Non-Repainting Mode - Strict mode available for backtesting accuracy and live trading reliability
🎨Visualization
● Dynamic Zone Lines - Color-coded support and resistance levels with opacity reflecting zone strength and confidence scores
● Probability Labels - Real-time display of reaction probabilities, touch counts, and historical hit rates for active zones
● Zone Fills - Optional semi-transparent zone highlighting for enhanced visual clarity and immediate pattern recognition
● Adaptive Styling - Automatic color and opacity adjustments based on zone scoring and statistical significance
📖Usage Guidelines
● Lookback Bars - Default 500, Range 100-1000, Controls the historical data window for pivot analysis and zone calculation
● Pivot Left/Right - Default 3, Range 1-10, Defines the pivot detection sensitivity and confirmation requirements
● Bin Size ATR units - Default 0.25, Range 0.1-2.0, Controls price level clustering granularity for zone creation
● Base Zone Half-Width ATR units - Default 0.25, Range 0.1-1.0, Sets the minimum zone width in ATR units for institutional level boundaries
● Zone Merge IoU Threshold - Default 0.5, Range 0.1-0.9, Intersection over Union threshold for automatic zone merging algorithms
● Max Active Zones - Default 5, Range 3-20, Maximum number of zones displayed simultaneously to prevent chart clutter
● Probability Threshold for Labels - Default 0.6, Range 0.3-0.9, Minimum reaction probability required for zone label display and alerts
● Distance Weight w1 - Controls influence of price distance from zone center on reaction probability
● Width Weight w2 - Adjusts impact of zone width on probability calculations
● Volume Weight w3 - Modifies volume Z-score influence on zone strength assessment
● VWAP Weight w4 - Controls VWAP deviation impact on institutional level significance
● Touch Count Weight w5 - Adjusts influence of historical zone interactions on probability scoring
● Hit Rate Weight w6 - Controls prior success rate impact on future reaction likelihood predictions
● Wick Penetration Weight w7 - Modifies wick penetration analysis influence on probability calculations
● Trend Weight w8 - Adjusts trend context impact using ADX analysis for directional bias assessment
✅Best Use Cases
● Swing Trading Entries - Enter positions at high-probability institutional zones with 60%+ reaction scores
● Scalping Opportunities - Quick entries and exits around frequently tested institutional levels
● Risk Management - Use zones as dynamic stop-loss and take-profit levels based on institutional behavior
● Market Structure Analysis - Identify key institutional levels that define current market structure and sentiment
● Confluence Trading - Combine with other technical indicators for high-probability trade setups
● Session-Based Strategies - Focus analysis during high-volume sessions for maximum effectiveness
⚠️Limitations
● Historical Pattern Dependency - Algorithm effectiveness relies on historical patterns that may not repeat in changing market conditions
● Computational Intensity - Complex calculations may impact chart performance on lower-end devices or with multiple indicators
● Probability Estimates - Reaction probabilities are statistical estimates and do not guarantee actual market outcomes
● Session Sensitivity - Performance may vary significantly between different market sessions and volatility regimes
● Parameter Sensitivity - Results can be highly dependent on input parameters requiring optimization for different instruments
💡What Makes This Unique
● CNN Architecture - First indicator to apply convolutional neural network principles to institutional-level detection
● Real-Time ML Scoring - Live machine learning probability calculations for each zone interaction
● Advanced Zone Management - Sophisticated algorithms for zone lifecycle management and automatic optimization
● Statistical Rigor - Comprehensive 9-factor logistic regression model with extensive backtesting validation
● Performance Optimization - Efficient processing algorithms designed for real-time trading applications
🔬How It Works
● Multi-timeframe pivot identification - Uses configurable sensitivity parameters for advanced pivot detection
● ATR-normalized strength calculations - Standardizes pivot significance across different volatility regimes
● Volume Z-score integration - Enhanced pivot weighting based on time-of-day volume patterns
● Price level clustering - Neural network binning algorithms with ATR-based sizing for zone creation
● Recency decay applications - Weights recent pivots more heavily than historical data for relevance
● Statistical filtering - Eliminates low-significance price levels and reduces market noise
● Dynamic zone generation - Creates zones from statistically significant pivot clusters with minimum support thresholds
● IoU-based merging algorithms - Combines overlapping zones while maintaining accuracy using Intersection over Union
● Adaptive decay systems - Automatic removal of outdated or low-performing zones for optimal performance
● 9-factor logistic regression - Incorporates distance, width, volume, VWAP, touch history, and trend analysis
● Real-time scoring updates - Zone interaction calculations with configurable threshold filtering
● Optional CNN filters - Gradient detection, smoothing, and Difference of Gaussians processing for enhanced accuracy
💡Note
This indicator represents advanced quantitative analysis and should be used by traders familiar with statistical modeling concepts. The probability scores are mathematical estimates based on historical patterns and should be combined with proper risk management and additional technical analysis for optimal trading decisions.
[blackcat] L2 Trend Linearity
OVERVIEW
The L2 Trend Linearity indicator is a sophisticated market analysis tool designed to help traders identify and visualize market trend linearity by analyzing price action relative to dynamic support and resistance zones. This Pine Script indicator applies the Arnaud Legoux Moving Average (ALMA) algorithm to weighted price series to generate dynamic support/resistance zones that adapt to changing market conditions. By visualizing market zones through colored candles and histograms, the indicator provides clear visual cues about market momentum and potential trading opportunities. The script generates buy/sell signals based on zone crossovers, making it an invaluable tool for both technical analysis and automated trading strategies. Whether you're a day trader, swing trader, or algorithmic trader, this indicator can help you identify market regimes, support/resistance levels, and potential entry/exit points with greater precision.
FEATURES
Dynamic Support/Resistance Zones: Calculates dynamic support (bear market zone) and resistance (bull market zone) using weighted price calculations and ALMA smoothing
Visual Market Representation: Color-coded candles and histograms provide immediate visual feedback about market conditions
Smart Signal Generation: Automatic buy/sell signals generated from zone crossovers with clear visual indicators
Customizable Parameters: Four different ALMA smoothing parameters for various timeframes and trading styles
Multi-Timeframe Compatibility: Works across different timeframes from 1-minute to weekly charts
Real-time Analysis: Provides instant feedback on market momentum and trend direction
Clear Visual Cues: Green candles indicate bullish momentum, red candles indicate bearish momentum, and white candles indicate neutral conditions
Histogram Visualization: Blue histogram shows bear market zone (below support), aqua histogram shows bull market zone (above resistance)
Signal Labels: "B" labels mark buy signals (price crosses above resistance), "S" labels mark sell signals (price crosses below support)
Overlay Functionality: Works as an overlay indicator without cluttering the chart with unnecessary elements
Highly Customizable: All parameters can be adjusted to suit different trading strategies and market conditions
HOW TO USE
Add the Indicator to Your Chart
Open TradingView and navigate to your desired trading instrument
Click on "Indicators" in the top menu and select "New"
Search for "L2 Trend Linearity" or paste the Pine Script code
Click "Add to Chart" to apply the indicator
Configure the Parameters
ALMA Length Short: Set the short-term smoothing parameter (default: 3). Lower values provide more responsive signals but may generate more false signals
ALMA Length Medium: Set the medium-term smoothing parameter (default: 5). This provides a balance between responsiveness and stability
ALMA Length Long: Set the long-term smoothing parameter (default: 13). Higher values provide more stable signals but with less responsiveness
ALMA Length Very Long: Set the very long-term smoothing parameter (default: 21). This provides the most stable support/resistance levels
Understand the Visual Elements
Green Candles: Indicate bullish momentum when price is above the bear market zone (support)
Red Candles: Indicate bearish momentum when price is below the bull market zone (resistance)
White Candles: Indicate neutral market conditions when price is between support and resistance zones
Blue Histogram: Shows bear market zone when price is below support level
Aqua Histogram: Shows bull market zone when price is above resistance level
"B" Labels: Mark buy signals when price crosses above resistance
"S" Labels: Mark sell signals when price crosses below support
Identify Market Regimes
Bullish Regime: Price consistently above resistance zone with green candles and aqua histogram
Bearish Regime: Price consistently below support zone with red candles and blue histogram
Neutral Regime: Price oscillating between support and resistance zones with white candles
Generate Trading Signals
Buy Signals: Look for price crossing above the bull market zone (resistance) with confirmation from green candles
Sell Signals: Look for price crossing below the bear market zone (support) with confirmation from red candles
Confirmation: Always wait for confirmation from candle color changes before entering trades
Optimize for Different Timeframes
Scalping: Use shorter ALMA lengths (3-5) for 1-5 minute charts
Day Trading: Use medium ALMA lengths (5-13) for 15-60 minute charts
Swing Trading: Use longer ALMA lengths (13-21) for 1-4 hour charts
Position Trading: Use very long ALMA lengths (21+) for daily and weekly charts
LIMITATIONS
Whipsaw Markets: The indicator may generate false signals in choppy, sideways markets where price oscillates rapidly between support and resistance
Lagging Nature: Like all moving average-based indicators, there is inherent lag in the calculations, which may result in delayed signals
Not a Standalone Tool: This indicator should be used in conjunction with other technical analysis tools and risk management strategies
Market Structure Dependency: Performance may vary depending on market structure and volatility conditions
Parameter Sensitivity: Different markets may require different parameter settings for optimal performance
No Volume Integration: The indicator does not incorporate volume data, which could provide additional confirmation signals
Limited Backtesting: Pine Script limitations may restrict comprehensive backtesting capabilities
Not Suitable for All Instruments: May perform differently on stocks, forex, crypto, and futures markets
Requires Confirmation: Signals should always be confirmed with other indicators or price action analysis
Not Predictive: The indicator identifies current market conditions but does not predict future price movements
NOTES
ALMA Algorithm: The indicator uses the Arnaud Legoux Moving Average (ALMA) algorithm, which is known for its excellent smoothing capabilities and reduced lag compared to traditional moving averages
Weighted Price Calculations: The bear market zone uses (2 × low + close) / 3, while the bull market zone uses (high + 2 × close) / 3, giving more weight to recent price action (see the sketch after these notes)
Dynamic Zones: The support and resistance zones are dynamic and adapt to changing market conditions, making them more responsive than static levels
Color Psychology: The color scheme follows traditional trading psychology - green for bullish, red for bearish, and white for neutral
Signal Timing: The signals are generated on the close of each bar, ensuring they are based on complete price action
Label Positioning: Buy signals appear below the bar (red "B" label), while sell signals appear above the bar (green "S" label)
Multiple Timeframes: The indicator can be applied to multiple timeframes simultaneously for comprehensive analysis
Risk Management: Always use proper risk management techniques when trading based on indicator signals
Market Context: Consider the overall market context and trend direction when interpreting signals
Confirmation: Look for confirmation from other indicators or price action patterns before entering trades
Practice: Test the indicator on historical data before using it in live trading
Customization: Feel free to experiment with different parameter combinations to find what works best for your trading style
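Putting the notes above together, a minimal sketch of the two ALMA-smoothed zones and the crossover signals might look like this. The ALMA offset/sigma values of 0.85/6 are common defaults, assumed here rather than taken from the published code.

```pine
//@version=5
indicator("Trend linearity zones sketch", overlay=true)

lenLong = input.int(13, "ALMA Length Long")

// Weighted prices: support leans on lows, resistance on closes/highs.
bearZone = ta.alma((2 * low + close) / 3, lenLong, 0.85, 6)  // support
bullZone = ta.alma((high + 2 * close) / 3, lenLong, 0.85, 6) // resistance

buy  = ta.crossover(close, bullZone)
sell = ta.crossunder(close, bearZone)

plot(bearZone, "Bear zone (support)", color=color.blue)
plot(bullZone, "Bull zone (resistance)", color=color.aqua)
plotshape(buy, style=shape.labelup, location=location.belowbar, text="B", color=color.red, textcolor=color.white)
plotshape(sell, style=shape.labeldown, location=location.abovebar, text="S", color=color.green, textcolor=color.white)
```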
THANKS
Special thanks to the TradingView community and the Pine Script developers for creating such a powerful and flexible platform for technical analysis. This indicator builds upon the foundation of the ALMA algorithm and various moving average techniques developed by technical analysis pioneers. The concept of dynamic support and resistance zones has been refined over decades of market analysis, and this script represents a modern implementation of these timeless principles. We acknowledge the contributions of all traders and developers who have contributed to the evolution of technical analysis and continue to push the boundaries of what's possible with algorithmic trading tools.
Risk-Adjusted Momentum Oscillator
# Risk-Adjusted Momentum Oscillator (RAMO): Momentum Analysis with Integrated Risk Assessment
## 1. Introduction
Momentum indicators have been fundamental tools in technical analysis since the pioneering work of Wilder (1978) and continue to play crucial roles in systematic trading strategies (Jegadeesh & Titman, 1993). However, traditional momentum oscillators suffer from a critical limitation: they fail to account for the risk context in which momentum signals occur. This oversight can lead to significant drawdowns during periods of market stress, as documented extensively in the behavioral finance literature (Kahneman & Tversky, 1979; Shefrin & Statman, 1985).
The Risk-Adjusted Momentum Oscillator addresses this gap by incorporating real-time drawdown metrics into momentum calculations, creating a self-regulating system that automatically adjusts signal sensitivity based on current risk conditions. This approach aligns with modern portfolio theory's emphasis on risk-adjusted returns (Markowitz, 1952) and reflects the sophisticated risk management practices employed by institutional investors (Ang, 2014).
## 2. Theoretical Foundation
### 2.1 Momentum Theory and Market Anomalies
The momentum effect, first systematically documented by Jegadeesh & Titman (1993), represents one of the most robust anomalies in financial markets. Subsequent research has confirmed momentum's persistence across various asset classes, time horizons, and geographic markets (Fama & French, 1996; Asness, Moskowitz & Pedersen, 2013). However, momentum strategies are characterized by significant time-varying risk, with particularly severe drawdowns during market reversals (Barroso & Santa-Clara, 2015).
### 2.2 Drawdown Analysis and Risk Management
Maximum drawdown, defined as the peak-to-trough decline in portfolio value, serves as a critical risk metric in professional portfolio management (Young, 1991). Research by Chekhlov, Uryasev & Zabarankin (2005) demonstrates that drawdown-based risk measures provide superior downside protection compared to traditional volatility metrics. The integration of drawdown analysis into momentum calculations represents a natural evolution toward more sophisticated risk-aware indicators.
### 2.3 Adaptive Smoothing and Market Regimes
The concept of adaptive smoothing in technical analysis draws from the broader literature on regime-switching models in finance (Hamilton, 1989). Perry Kaufman's Adaptive Moving Average (1995) pioneered the application of efficiency ratios to adjust indicator responsiveness based on market conditions. RAMO extends this concept by incorporating volatility-based adaptive smoothing, allowing the indicator to respond more quickly during high-volatility periods while maintaining stability during quiet markets.
## 3. Methodology
### 3.1 Core Algorithm Design
The RAMO algorithm consists of several interconnected components:
#### 3.1.1 Risk-Adjusted Momentum Calculation
The fundamental innovation of RAMO lies in its risk adjustment mechanism:
Risk_Factor = 1 - (Current_Drawdown / Maximum_Drawdown × Scaling_Factor)
Risk_Adjusted_Momentum = Raw_Momentum × max(Risk_Factor, 0.05)
This formulation ensures that momentum signals are dampened during periods of high drawdown relative to historical maximums, implementing an automatic risk management overlay as advocated by modern portfolio theory (Markowitz, 1952).
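A compact Pine sketch of this mechanism, using a rolling-peak drawdown as a stand-in for the indicator's configurable drawdown measure:

```pine
//@version=5
indicator("RAMO risk-adjustment sketch")

lookback = input.int(252, "Drawdown lookback")
scaling  = input.float(1.0, "Scaling factor")

// Drawdown relative to the rolling peak (a simplifying assumption).
peak      = ta.highest(close, lookback)
currentDD = (peak - close) / peak
maxDD     = ta.highest(currentDD, lookback)

riskFactor = 1.0 - (maxDD > 0 ? currentDD / maxDD * scaling : 0.0)
rawMom     = ta.roc(close, 14)
ramo       = rawMom * math.max(riskFactor, 0.05) // floor keeps some signal alive

plot(ramo, "Risk-adjusted momentum")
```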
#### 3.1.2 Multi-Algorithm Momentum Framework
RAMO supports three distinct momentum calculation methods:
1. Rate of Change: Traditional percentage-based momentum (Pring, 2002)
2. Price Momentum: Absolute price differences
3. Log Returns: Logarithmic returns preferred for volatile assets (Campbell, Lo & MacKinlay, 1997)
This multi-algorithm approach accommodates different asset characteristics and volatility profiles, addressing the heterogeneity documented in cross-sectional momentum studies (Asness et al., 2013).
### 3.2 Leading Indicator Components
#### 3.2.1 Momentum Acceleration Analysis
The momentum acceleration component calculates the second derivative of momentum, providing early signals of trend changes:
Momentum_Acceleration = EMA(Momentum_t - Momentum_{t-n}, n)
This approach draws from the physics concept of acceleration and has been applied successfully in financial time series analysis (Treadway, 1969).
#### 3.2.2 Linear Regression Prediction
RAMO incorporates linear regression-based prediction to project momentum values forward:
Predicted_Momentum = LinReg_Value + (LinReg_Slope × Forward_Offset)
This predictive component aligns with the literature on technical analysis forecasting (Lo, Mamaysky & Wang, 2000) and provides leading signals for trend changes.
#### 3.2.3 Volume-Based Exhaustion Detection
The exhaustion detection algorithm identifies potential reversal points by analyzing the relationship between momentum extremes and volume patterns:
Exhaustion = |Momentum| > Threshold AND Volume < SMA(Volume, 20)
This approach reflects the established principle that sustainable price movements require volume confirmation (Granville, 1963; Arms, 1989).
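Sketched in Pine, with the momentum Z-scored so that a fixed extremity threshold is comparable across assets (the 100-bar standardization window is an assumption):

```pine
//@version=5
indicator("Exhaustion filter sketch")

mom  = ta.roc(close, 14)
momZ = (mom - ta.sma(mom, 100)) / ta.stdev(mom, 100)
threshold = input.float(2.0, "Exhaustion threshold (Z)")

// Extreme momentum on fading volume = potential exhaustion.
exhausted = math.abs(momZ) > threshold and volume < ta.sma(volume, 20)
plot(momZ, "Momentum (Z)")
bgcolor(exhausted ? color.new(color.orange, 80) : na)
```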
### 3.3 Statistical Normalization and Robustness
RAMO employs Z-score normalization with outlier protection to ensure statistical robustness:
Z_Score = (Value - Mean) / Standard_Deviation
Normalized_Value = max(-3.5, min(3.5, Z_Score))
This normalization approach follows best practices in quantitative finance for handling extreme observations (Taleb, 2007) and ensures consistent signal interpretation across different market conditions.
### 3.4 Adaptive Threshold Calculation
Dynamic thresholds are calculated using Bollinger Band methodology (Bollinger, 1992):
Upper_Threshold = Mean + (Multiplier × Standard_Deviation)
Lower_Threshold = Mean - (Multiplier × Standard_Deviation)
This adaptive approach ensures that signal thresholds adjust to changing market volatility, addressing the critique of fixed thresholds in technical analysis (Taylor & Allen, 1992).
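The normalization and adaptive-threshold steps combine naturally; here is a sketch using the stated ±3.5 clamp and a 2.0 multiplier:

```pine
//@version=5
indicator("Normalization and adaptive thresholds sketch")

len  = input.int(100, "Stats length")
mult = input.float(2.0, "Threshold multiplier")

val  = ta.roc(close, 14)
mean = ta.sma(val, len)
sd   = ta.stdev(val, len)

// Z-score with outlier clamp at +/-3.5.
z = sd > 0 ? (val - mean) / sd : 0.0
zClamped = math.max(-3.5, math.min(3.5, z))

// Bollinger-style adaptive thresholds on the normalized series.
zMean = ta.sma(zClamped, len)
zSd   = ta.stdev(zClamped, len)
plot(zClamped, "Normalized momentum")
plot(zMean + mult * zSd, "Upper threshold", color=color.red)
plot(zMean - mult * zSd, "Lower threshold", color=color.green)
```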
## 4. Implementation Details
### 4.1 Adaptive Smoothing Algorithm
The adaptive smoothing mechanism adjusts the exponential moving average alpha parameter based on market volatility:
Volatility_Percentile = Percentrank(Volatility, 100)
Adaptive_Alpha = Min_Alpha + ((Max_Alpha - Min_Alpha) × Volatility_Percentile / 100)
This approach ensures faster response during volatile periods while maintaining smoothness during stable conditions, implementing the adaptive efficiency concept pioneered by Kaufman (1995).
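A minimal sketch of the volatility-scaled alpha driving a custom EMA (the alpha bounds are illustrative):

```pine
//@version=5
indicator("Adaptive smoothing sketch", overlay=true)

minAlpha = input.float(0.05, "Min alpha")
maxAlpha = input.float(0.5, "Max alpha")

vol    = ta.stdev(ta.roc(close, 1), 20)
volPct = ta.percentrank(vol, 100) // 0..100

alpha = minAlpha + (maxAlpha - minAlpha) * volPct / 100.0

var float adaptiveEma = na
adaptiveEma := na(adaptiveEma) ? close : alpha * close + (1.0 - alpha) * adaptiveEma

plot(adaptiveEma, "Volatility-adaptive EMA", color=color.purple)
```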
### 4.2 Risk Environment Classification
RAMO classifies market conditions into three risk environments:
- Low Risk: Current_DD < 30% × Max_DD
- Medium Risk: 30% × Max_DD ≤ Current_DD < 70% × Max_DD
- High Risk: Current_DD ≥ 70% × Max_DD
This classification system enables conditional signal generation, with long signals filtered during high-risk periods, an approach consistent with institutional risk management practices (Ang, 2014).
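Using the same rolling-peak drawdown proxy as the earlier sketch, the three-regime split can be expressed as:

```pine
//@version=5
indicator("Risk regime sketch")

lookback  = input.int(252, "Lookback")
peak      = ta.highest(close, lookback)
currentDD = (peak - close) / peak
maxDD     = ta.highest(currentDD, lookback)

// 0 = low risk, 1 = medium, 2 = high; long signals are filtered at 2.
regime = currentDD < 0.30 * maxDD ? 0 : currentDD < 0.70 * maxDD ? 1 : 2
plot(regime, "Risk regime", style=plot.style_stepline)
bgcolor(regime == 2 ? color.new(color.red, 85) : na)
```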
## 5. Signal Generation and Interpretation
### 5.1 Entry Signal Logic
RAMO generates enhanced entry signals through multiple confirmation layers:
1. Primary Signal: Crossover between indicator and signal line
2. Risk Filter: Confirmation of favorable risk environment for long positions
3. Leading Component: Early warning signals via acceleration analysis
4. Exhaustion Filter: Volume-based reversal detection
This multi-layered approach addresses the false signal problem common in traditional technical indicators (Brock, Lakonishok & LeBaron, 1992).
### 5.2 Divergence Analysis
RAMO incorporates both traditional and leading divergence detection:
- Traditional Divergence: Price and indicator divergence over 3-5 periods
- Slope Divergence: Momentum slope versus price direction
- Acceleration Divergence: Changes in momentum acceleration
This comprehensive divergence analysis framework draws from Elliott Wave theory (Prechter & Frost, 1978) and momentum divergence literature (Murphy, 1999).
## 6. Empirical Advantages and Applications
### 6.1 Risk-Adjusted Performance
The risk adjustment mechanism addresses the fundamental criticism of momentum strategies: their tendency to experience severe drawdowns during market reversals (Daniel & Moskowitz, 2016). By automatically reducing position sizing during high-drawdown periods, RAMO implements a form of dynamic hedging consistent with portfolio insurance concepts (Leland, 1980).
### 6.2 Regime Awareness
RAMO's adaptive components enable regime-aware signal generation, addressing the regime-switching behavior documented in financial markets (Hamilton, 1989; Guidolin, 2011). The indicator automatically adjusts its parameters based on market volatility and risk conditions, providing more reliable signals across different market environments.
### 6.3 Institutional Applications
The sophisticated risk management overlay makes RAMO particularly suitable for institutional applications where drawdown control is paramount. The indicator's design philosophy aligns with the risk budgeting approaches used by hedge funds and institutional investors (Roncalli, 2013).
## 7. Limitations and Future Research
### 7.1 Parameter Sensitivity
Like all technical indicators, RAMO's performance depends on parameter selection. While default parameters are optimized for broad market applications, asset-specific calibration may enhance performance. Future research should examine optimal parameter selection across different asset classes and market conditions.
### 7.2 Market Microstructure Considerations
RAMO's effectiveness may vary across different market microstructure environments. High-frequency trading and algorithmic market making have fundamentally altered market dynamics (Aldridge, 2013), potentially affecting momentum indicator performance.
### 7.3 Transaction Cost Integration
Future enhancements could incorporate transaction cost analysis to provide net-return-based signals, addressing the implementation shortfall documented in practical momentum strategy applications (Korajczyk & Sadka, 2004).
## References
Aldridge, I. (2013). *High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems*. 2nd ed. Hoboken, NJ: John Wiley & Sons.
Ang, A. (2014). *Asset Management: A Systematic Approach to Factor Investing*. New York: Oxford University Press.
Arms, R. W. (1989). *The Arms Index (TRIN): An Introduction to the Volume Analysis of Stock and Bond Markets*. Homewood, IL: Dow Jones-Irwin.
Asness, C. S., Moskowitz, T. J., & Pedersen, L. H. (2013). Value and momentum everywhere. *Journal of Finance*, 68(3), 929-985.
Barroso, P., & Santa-Clara, P. (2015). Momentum has its moments. *Journal of Financial Economics*, 116(1), 111-120.
Bollinger, J. (1992). *Bollinger on Bollinger Bands*. New York: McGraw-Hill.
Brock, W., Lakonishok, J., & LeBaron, B. (1992). Simple technical trading rules and the stochastic properties of stock returns. *Journal of Finance*, 47(5), 1731-1764.
Young, T. W. (1991). Calmar ratio: A smoother tool. *Futures*, 20(1), 40.
Campbell, J. Y., Lo, A. W., & MacKinlay, A. C. (1997). *The Econometrics of Financial Markets*. Princeton, NJ: Princeton University Press.
Chekhlov, A., Uryasev, S., & Zabarankin, M. (2005). Drawdown measure in portfolio optimization. *International Journal of Theoretical and Applied Finance*, 8(1), 13-58.
Daniel, K., & Moskowitz, T. J. (2016). Momentum crashes. *Journal of Financial Economics*, 122(2), 221-247.
Fama, E. F., & French, K. R. (1996). Multifactor explanations of asset pricing anomalies. *Journal of Finance*, 51(1), 55-84.
Granville, J. E. (1963). *Granville's New Key to Stock Market Profits*. Englewood Cliffs, NJ: Prentice-Hall.
Guidolin, M. (2011). Markov switching models in empirical finance. In D. N. Drukker (Ed.), *Missing Data Methods: Time-Series Methods and Applications* (pp. 1-86). Bingley: Emerald Group Publishing.
Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. *Econometrica*, 57(2), 357-384.
Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. *Journal of Finance*, 48(1), 65-91.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. *Econometrica*, 47(2), 263-291.
Kaufman, P. J. (1995). *Smarter Trading: Improving Performance in Changing Markets*. New York: McGraw-Hill.
Korajczyk, R. A., & Sadka, R. (2004). Are momentum profits robust to trading costs? *Journal of Finance*, 59(3), 1039-1082.
Leland, H. E. (1980). Who should buy portfolio insurance? *Journal of Finance*, 35(2), 581-594.
Lo, A. W., Mamaysky, H., & Wang, J. (2000). Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation. *Journal of Finance*, 55(4), 1705-1765.
Markowitz, H. (1952). Portfolio selection. *Journal of Finance*, 7(1), 77-91.
Murphy, J. J. (1999). *Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications*. New York: New York Institute of Finance.
Prechter, R. R., & Frost, A. J. (1978). *Elliott Wave Principle: Key to Market Behavior*. Gainesville, GA: New Classics Library.
Pring, M. J. (2002). *Technical Analysis Explained: The Successful Investor's Guide to Spotting Investment Trends and Turning Points*. 4th ed. New York: McGraw-Hill.
Roncalli, T. (2013). *Introduction to Risk Parity and Budgeting*. Boca Raton, FL: CRC Press.
Shefrin, H., & Statman, M. (1985). The disposition to sell winners too early and ride losers too long: Theory and evidence. *Journal of Finance*, 40(3), 777-790.
Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Taylor, M. P., & Allen, H. (1992). The use of technical analysis in the foreign exchange market. *Journal of International Money and Finance*, 11(3), 304-314.
Treadway, A. B. (1969). On rational entrepreneurial behavior and the demand for investment. *Review of Economic Studies*, 36(2), 227-239.
Wilder, J. W. (1978). *New Concepts in Technical Trading Systems*. Greensboro, NC: Trend Research.
TAPDA Hourly Open Lines (Candle Body Box)
-What is TAPDA?
TAPDA (Time and Price Displacement Analysis) is based on the belief that markets are driven by algorithms that respond to key time-based price levels, such as session opens. Traders who follow TAPDA track these levels to anticipate price movements, reversals, and breakouts, aligning their strategies with the patterns left by these underlying algorithms. By plotting lines at specific hourly opens, the indicator allows traders to visualize where the market may react, providing a structured way to trade alongside the algorithmic flow.
***************
**Sauce Alert** "TAPDA levels essentially act like algorithmic support and resistance" By plotting these hourly opens, the TAPDA Hourly Open Lines indicator helps traders track where algorithms might engage with the market.
***************
-How It Works:
The indicator draws a "candle body box" at selected hours, marking the open and close prices to highlight price ranges at significant times. This creates dynamic zones that reflect market sentiment and structure throughout the day. TAPDA levels are commonly respected by price, making them useful for identifying potential entry points, stop placements, and trend reversals.
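A stripped-down sketch of the idea - a level projected forward from a chosen hour's open. The full script's candle-body boxes, cleanup, and time-offset handling are omitted here.

```pine
//@version=5
indicator("Hourly open line sketch", overlay=true)

targetHour = input.int(9, "Hour (exchange time, 24h)")
extendBars = input.int(24, "Extend (bars)")

// The first bar of the target hour marks that hour's open.
isNewTargetHour = hour(time) == targetHour and hour(time[1]) != targetHour
if isNewTargetHour
    line.new(bar_index, open, bar_index + extendBars, open, color=color.blue, width=2)
    label.new(bar_index, open, str.tostring(targetHour) + ":00 open", style=label.style_label_left)
```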
-Key Features:
Customizable Hour Levels – Enable or disable specific times to fit your trading approach.
Color & Label Control – Assign unique colors and labels to each hour for better visualization.
Line Extension – Project lines for up to 24 hours into the future to track key levels.
Dynamic Cleanup – Old lines automatically delete to maintain chart clarity.
Manual Time Offset – Adjust for broker or server time zone differences.
-Current Development:
This indicator is still in development, with further updates planned to enhance functionality and customization. If you find this script helpful, feel free to copy the code and stay tuned for new features and improvements!
[Pandora] Error Function Treasure Trove - ERF/ERFI/Sigmoids+
PRAISE:
At this time, I have to graciously thank the wonderful minds behind the new "Pine Profiler Mode" (PPM). Directly prior to this release, it allowed me to ascertain script performance even more. While I usually write mostly in highly optimized Pine code, PPM visually identified a few bottlenecks that would otherwise be hard to identify. Anyone who contributed to PPM's creation and testing before release... BRAVO!!! I commend all of those who assisted in its state-of-the-art engineering and inception, well done!
BACKSTORY:
This script is specifically being released in defense of another member, an exceptionally unique PhD. It was brought to my attention that a script-mod-event occurred regarding the publishing of a measly antiquated error function (ERF) calculation within his script. This sadly resulted in the now former member jumping ship after receiving unmannerly responses amidst his curious inquiries as to why his erf() was modded. To forbid rusty and rudimentary formulations because a mod-on-duty is temporarily offended by a non-nefarious release of code is, in MY opinion, an injustice to principles of perpetuating open-source code intended to benefit thousands to millions of community members. While Pine is the heart and soul of TV, the mathematical concepts contributed from the minds of members are the inspirational fuel of curiosity that powers its pertinent reason to exist and evolve.
It is an indisputable fact that most members are not greatly skilled Pine Poets. Many members may be incapable of innovating robust function code in Pine, even if they have one or more PhDs. We ALL come from various disciplines of mathematical comprehension and education. Some mathematicians are not greatly skilled at coding, while some coders are not exceptional at math. So... what am I to do to attempt to resolve this circumstantial challenge??? Those who know me best are aware that I will always side with "the right side of history" in order to accomplish my primary self-defined missions I choose to accept. Serving as an algorithmic advocate, I felt compelled to intercede by compiling numerous error functions into elegant code of very high caliber that any and every TV member may choose to employ, so this ERROR never happens again.
After weeks of contemplation into algorithms I knew little about, I prioritized myself to resolve an unanticipated matter by creating advanced formulas of exquisitely crafted error functions refined to the best of my current abilities. My aversion for unresolved problems motivated me to eviscerate error function insufficiencies with many more rigid formulations beyond what is thought to exist. ERF needed a proper algorithmic exorcism anyways. In my furiosity, I contemplated an array of madMAXimum diplomatic demolition methods, choosing the chain saw massacre technique to slaughter dysfunctionalities I encountered on a battered ERF roadway. This resulted in prolific solutions that should assuredly endure the test of time. Poetically, as you will come to see, I am ripping the lid off of Pandora's box of error functions in this case to correct wrongs into a splendid bundle of rights for members.
INTENTION:
Error function (ERF) enthusiasts... PREPARE FOR GLORY!! The specific purpose of this script is to deprecate classic error functions with the creation of a fierce and formidable army of superior formulations, each having varying attributes of computational complexity with differing absolute error ranges in their results for multiple compute scenarios. This is NOT an indicator... It is intended to allow members to embark on endeavors to advance the profound knowledge base of this growing worldwide community of 60+ million inquisitive minds. For those of you who believe computational mathematics and statistics are near completion at their finest, I am here to inform you: this is ridiculous to ponder. We are nowhere near the statistical excellence that can and will exist eventually. At this time, metaphorically speaking, we are merely scratching microns off of the surface of the skin of a statistical apple Isaac Newton once pondered.
THIS RELEASE:
Following weeks of pondering methodical experiments beyond the ordinary, I am liberating these wild notions of my error function explorations to the entire globe as copyleft code, not just Pine. This Pandora's basket of ERFs is being openly disclosed for the sake of the sanctity of mathematics, empirical science (not the garbage we are told by CONTROLocrats to blindly trust), revolutionary cutting edge engineering, cosmology, physics, information technology, artificial intelligence, and EVERY other mathematical branch of human knowledge being discovered over centuries. I do believe James Glaisher would favor my aims concerning ERF aspirations embracing the "Power of Pine".
The included functions are intended for TV members to use in any way they see fit. This is a gift to ALL members to foster future innovative excellence on this platform. Any attempt to moderate this code without notification of "self-evident clear and just cause" will be considered an irrevocable egregious action. The original foundational PURPOSE of establishing script moderation (I clearly remember) was primarily to maintain active vigilance over a growing community against intentional nefarious actions and/or behaviors in blatant disrespect to other author's works AND also thwart rampant copypasting bandit operations, all while accommodating balanced principles of fairness for an educational community cause via open source publishing that should support future algorithmic inventions well beyond my lifespan.
APPLICATIONS:
The related error functions are used in probability theory, statistics, and numerous scientific and engineering disciplines. Their key characteristics and applications are innumerable in computational realms. Their versatility and significance make them fundamental tools in arenas of quantitative analysis and scientific research...
Probability Theory - Widely used to calculate probabilities and quantiles of the normal distribution.
Statistics - It's related to the Gaussian integral and plays a crucial role in statistics, especially in hypothesis testing and confidence interval calculations.
Physics - In physics, it arises in the study of diffusion equations, quantum mechanics, and heat conduction problems.
Engineering - Applications exist in engineering disciplines such as signal processing, control theory, and telecommunications.
Error Analysis - It's employed in error analysis and uncertainty quantification.
Numeric Approximations - Due to its lack of a closed-form expression, numerical methods are often employed to approximate erf/erfi().
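For instance, one classic polynomial approximation (Abramowitz & Stegun 7.1.26, maximum absolute error around 1.5e-7) is easy to express in Pine. This is textbook material, not the author's tuned perf()/polynomial formulations:

```pine
//@version=5
indicator("erf approximation sketch")

// Abramowitz & Stegun 7.1.26: erf(x) ~ 1 - poly(t) * exp(-x*x), t = 1/(1+p*x).
erf(x) =>
    a1 = 0.254829592
    a2 = -0.284496736
    a3 = 1.421413741
    a4 = -1.453152027
    a5 = 1.061405429
    p  = 0.3275911
    s  = x < 0 ? -1.0 : 1.0
    ax = math.abs(x)
    t  = 1.0 / (1.0 + p * ax)
    y  = 1.0 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * math.exp(-ax * ax)
    s * y // odd symmetry: erf(-x) = -erf(x)

x = (bar_index % 400) / 100.0 - 2.0 // sweep the interval [-2, 2)
plot(erf(x), "erf(x)")
```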
AI, LLMs, & MACHINE LEARNING:
The error function (ERF) is indispensable to various AI applications, particularly due to its relation to Gaussian distributions and error analysis. It is used in Gaussian processes for regression and classification, probabilistic inference for Bayesian networks, soft margin computation in SVMs, neural networks involving Gaussian activation functions or noise, and clustering algorithms like Gaussian Mixture Models. Improved ERF approximations can enhance precision in these applications, reduce computational complexity, handle outliers and noise better, and improve optimization and convergence, possibly leading to more accurate, efficient, and robust AI systems.
BONUS ALGORITHMS:
While ERFs are versatile, their opposites also exist in the form of inverse error functions (ERFIs). I have also included a modified form of the inverse Fisher transform alongside MY sigmoid (sigmyod). I am uncertain what sigmyod() may be used for, but it's a culmination of my examinations deep into "sigmoid domains", something I am fascinated by. Whatever implications it may possess, I am unveiling it along with its cousin functions. For curious minds, this quality of composition seen here is ideally what underlies what I would term "Pandora functionality" that empowers my Pandora indication. I go through hordes of formulations, testing, and inspection to find what appears to be the most beneficial logical/mathematical equation to apply...
SCRIPT OPERATION:
To showcase the characteristics and performance of my ERF/ERFI formulations, I devised a multi-modal script. By using bar_index , I generated a broad sequence of numeric values to input into the first ERF/ERFI parameter. These sequences allow you to inspect the contours of the error function's outputs for both ERF and ERFI. When combined with compute-intensive precision functions (CIPFs), the polynomial function output values can be subtracted from my CIPFs to obtain results of absolute error, displaying the accuracy of the many polynomial estimation functions I tuned in testing for Pine's float environment.
A host of numeric input settings are wildly adjustable to inspect values/curvatures across the range of numeric input sequences. Very large numbers, such as Divisor:100,000,100/Offset:200,000,000 for ERF modes or... Divisor:100,000,100/Offset:100,000,000 for ERFI modes, will display miniscule output values calculated from input values in close proximity to 0.0 for the various estimates, similar to a microscope. ERFI approximations very near in proximity to +/-1.0 will always yield large deviations of absolute error. Dragging/zooming your chart or using the Offset input will aid with visually clipping off those ERFI extremes where float precision functions cannot suffice.
NOTICE:
perf() and perfi() are intended for precision computation (as good as it basically gets) in a float environment. However, they are CPU intensive (especially perfi). I wouldn't recommend these being used in ANY Pine script unless it's an "absolute necessity" to do so to accomplish your goal. I only built them to obtain "absolute error curvatures" of the error functions for the polynomial approximations. These are visible in the accuracy modes in the indicator Settings.
Bogdan Ciocoiu - Code runner
Description
The Code Runner is a hybrid indicator that leverages other pre-configured, integrated open-source algorithms to help traders spot regular and continuation divergences.
The Code Runner specialises in integrating some of the most popular oscillators well known for their accuracy when scalping using divergence strategies.
Uniqueness
The Code Runner stands out as a one-stop-shop pack of oscillator algorithms that traders can further customise to spot divergences.
The indicator's uniqueness stems from its capability to recast each algorithm onto the same scale. This is achieved by manually adjusting the outputs of each algorithm to fit a scale between +100 and -100.
Another benefit of the Code Runner comes from its standardisation of outputs, mainly consisting of lines. Showing lines enables traders to draw potential regular and continuation divergences quickly.
The indicator has been pre-configured to support scalping at 1-5 minutes.
Open-source
The Code Runner uses the following open-source scripts and algorithms:
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
These algorithms are available in the public domain either in TradingView space or outside (given their popularity in the financial markets industry).
Adaptive Average Vortex Index [lastguru]
As a longtime fan of ADX, looking at the Vortex Indicator I often wondered: where is the third line? I have rarely seen anybody calculate it. So, here it is: Average Vortex Index - an ADX calculated from the Vortex Indicator. I interpret it similarly to the ADX indicator: higher values show a stronger trend. If you discover other interpretations or have suggestions, comments are welcome.
Both VI+ and VI- lines are also drawn. As I use adaptive length calculation in my other scripts (based on the libraries I've developed and published), I have also included the possibility to have an adaptive length here, so if you hate the idea of calculating ADX from VI, you can disable that line and just look at the adaptive Vortex Indicator.
Note that as with all my oscillators, all the lines here are renormalized to -1..1 range unlike the original Vortex Indicator computation. To do that for VI+ and VI- lines, I subtract 1 from their values. It does not change the shape or the amplitude of the lines.
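For reference, a sketch of the core construction - an ADX-style third line computed from the Vortex pair, with the -1 renormalization described above (all of the adaptive-length machinery is omitted):

```pine
//@version=5
indicator("Average Vortex Index sketch")

len = input.int(14, "Length")

vmPlus  = math.abs(high - low[1])
vmMinus = math.abs(low - high[1])
trSum   = math.sum(ta.tr, len)
viPlus  = math.sum(vmPlus, len) / trSum
viMinus = math.sum(vmMinus, len) / trSum

// ADX-style directional index built from the two vortex lines.
dx  = (viPlus + viMinus) != 0 ? math.abs(viPlus - viMinus) / (viPlus + viMinus) : 0.0
avx = ta.rma(dx, len)

// Renormalized as described: subtract 1 from VI+ and VI-.
plot(viPlus - 1, "VI+", color=color.green)
plot(viMinus - 1, "VI-", color=color.red)
plot(avx, "Average Vortex Index", color=color.blue, linewidth=2)
```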
Adaptation algorithms are roughly subdivided in two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries), all are selected in Adaptation dropdown. Length Adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptation usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks(*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (also, it does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks(*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author - John F. Ehlers . I do not know, which combination works best, so you can experiment.
If no Adaptation is selected ( None option), you can set Length directly. If an Adaptation is selected, then Cycle multiplier can be set.
The oscillator also has the option to configure the internal smoothing function with the Window setting. By default, RMA is used (as in the ADX calculation). The Fast Default option uses half the length for smoothing. Triangle, Hamming, and Hann Window algorithms are smoother alternatives suggested by John F. Ehlers.
After the oscillator a Moving Average can be applied. The following Moving Averages are included: SMA , RMA, EMA , HMA , VWMA , 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS.
Postfilter options are applied last:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic ) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Momentum - momentum (derivative)
Except for Inverse Fisher Transform , all Postfilter algorithms can have Length parameter. If it is not specified (set to 0), then the calculated Slow MA Length is used. If Filter/MA Length is less than 2 or Postfilter Length is less than 1, they are calculated as a multiplier of the calculated oscillator length.
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
Keltner Channel Enhanced [DCAUT]
█ Keltner Channel Enhanced
📊 ORIGINALITY & INNOVATION
The Keltner Channel Enhanced represents an important advancement over standard Keltner Channel implementations by introducing dual flexibility in moving average selection for both the middle band and ATR calculation. While traditional Keltner Channels typically use EMA for the middle band and RMA (Wilder's smoothing) for ATR, this enhanced version provides access to 25+ moving average algorithms for both components, enabling traders to fine-tune the indicator's behavior to match specific market characteristics and trading approaches.
Key Advancements:
Dual MA Algorithm Flexibility: Independent selection of moving average types for middle band (25+ options) and ATR smoothing (25+ options), allowing optimization of both trend identification and volatility measurement separately
Enhanced Trend Sensitivity: Ability to use faster algorithms (HMA, T3) for middle band while maintaining stable volatility measurement with traditional ATR smoothing, or vice versa for different trading strategies
Adaptive Volatility Measurement: Choice of ATR smoothing algorithm affects channel responsiveness to volatility changes, from highly reactive (SMA, EMA) to smoothly adaptive (RMA, TEMA)
Comprehensive Alert System: Five distinct alert conditions covering breakouts, trend changes, and volatility expansion, enabling automated monitoring without constant chart observation
Multi-Timeframe Compatibility: Works effectively across all timeframes from intraday scalping to long-term position trading, with independent optimization of trend and volatility components
This implementation addresses a key limitation of standard Keltner Channels: the fixed EMA/RMA combination may not suit all market conditions or trading styles. By decoupling the trend component from volatility measurement and allowing independent algorithm selection, traders can create highly customized configurations for specific instruments and market phases.
📐 MATHEMATICAL FOUNDATION
Keltner Channel Enhanced uses a three-component calculation system that combines a flexible moving average middle band with Average True Range (ATR)-based upper and lower channels, creating volatility-adjusted trend-following bands.
Core Calculation Process:
1. Middle Band (Basis) Calculation:
The basis line is calculated using the selected moving average algorithm applied to the price source over the specified period:
basis = ma(source, length, maType)
Supported algorithms include EMA (standard choice, trend-biased), SMA (balanced and symmetric), HMA (reduced lag), WMA, VWMA, TEMA, T3, KAMA, and 17+ others.
2. Average True Range (ATR) Calculation:
ATR measures market volatility by calculating the average of true ranges over the specified period:
trueRange = max(high - low, abs(high - close[1]), abs(low - close[1]))
atrValue = ma(trueRange, atrLength, atrMaType)
ATR smoothing algorithm significantly affects channel behavior, with options including RMA (standard, very smooth), SMA (moderate smoothness), EMA (fast adaptation), TEMA (smooth yet responsive), and others.
3. Channel Calculation:
Upper and lower channels are positioned at specified multiples of ATR from the basis:
upperChannel = basis + (multiplier × atrValue)
lowerChannel = basis - (multiplier × atrValue)
Standard multiplier is 2.0, providing channels that dynamically adjust width based on market volatility.
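Putting the three steps together, here is a compact Python/pandas sketch of the classic EMA/RMA configuration described above. It assumes a DataFrame with high, low, and close columns; the published indicator itself is written in Pine Script and exposes many more MA choices:

```python
import pandas as pd

def keltner_channel(df, length=20, atr_length=10, multiplier=2.0):
    # 1. Middle band: EMA of the close
    basis = df["close"].ewm(span=length, adjust=False).mean()
    # 2. ATR: RMA (Wilder's smoothing, an EMA with alpha = 1/length) of true range
    prev_close = df["close"].shift(1)
    true_range = pd.concat([df["high"] - df["low"],
                            (df["high"] - prev_close).abs(),
                            (df["low"] - prev_close).abs()], axis=1).max(axis=1)
    atr = true_range.ewm(alpha=1.0 / atr_length, adjust=False).mean()
    # 3. Channels offset by a multiple of ATR
    return basis, basis + multiplier * atr, basis - multiplier * atr
```

Swapping the two ewm calls for other smoothers is exactly the dual-algorithm flexibility the enhanced indicator exposes.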
Keltner Channel vs. Bollinger Bands - Key Differences:
While both indicators create volatility-based channels, they use fundamentally different volatility measures:
Keltner Channel (ATR-based):
Uses Average True Range to measure actual price movement volatility
Incorporates gaps and limit moves through true range calculation
More stable in trending markets, less prone to extreme compression
Better reflects intraday volatility and trading range
Typically fewer band touches, making touches more significant
More suitable for trend-following strategies
Bollinger Bands (Standard Deviation-based):
Uses statistical standard deviation to measure price dispersion
Based on closing prices only, doesn't account for intraday range
Can compress significantly during consolidation (squeeze patterns)
More touches in ranging markets
Better suited for mean-reversion strategies
Provides statistical probability framework (95% within 2 standard deviations)
Algorithm Combination Effects:
The interaction between middle band MA type and ATR MA type creates different indicator characteristics:
Trend-Focused Configuration (Fast MA + Slow ATR): Middle band uses HMA/EMA/T3, ATR uses RMA/TEMA, quick trend changes with stable channel width, suitable for trend-following
Volatility-Focused Configuration (Slow MA + Fast ATR): Middle band uses SMA/WMA, ATR uses EMA/SMA, stable trend with dynamic channel width, suitable for volatility trading
Balanced Configuration (Standard EMA/RMA): Classic Keltner Channel behavior, time-tested combination, suitable for general-purpose trend following
Adaptive Configuration (KAMA + KAMA): Self-adjusting indicator responding to efficiency ratio, suitable for markets with varying trend strength and volatility regimes
📊 COMPREHENSIVE SIGNAL ANALYSIS
Keltner Channel Enhanced provides multiple signal categories optimized for trend-following and breakout strategies.
Channel Position Signals:
Upper Channel Interaction:
Price Touching Upper Channel: Strong bullish momentum, price moving more than typical volatility range suggests, potential continuation signal in established uptrends
Price Breaking Above Upper Channel: Exceptional strength, price exceeding normal volatility expectations, consider adding to long positions or tightening trailing stops
Price Riding Upper Channel: Sustained strong uptrend, characteristic of powerful bull moves, stay with trend and avoid premature profit-taking
Price Rejection at Upper Channel: Momentum exhaustion signal, consider profit-taking on longs or waiting for pullback to middle band for reentry
Lower Channel Interaction:
Price Touching Lower Channel: Strong bearish momentum, price moving more than typical volatility range suggests, potential continuation signal in established downtrends
Price Breaking Below Lower Channel: Exceptional weakness, price exceeding normal volatility expectations, consider adding to short positions or protecting against further downside
Price Riding Lower Channel: Sustained strong downtrend, characteristic of powerful bear moves, stay with trend and avoid premature covering
Price Rejection at Lower Channel: Momentum exhaustion signal, consider covering shorts or waiting for bounce to middle band for reentry
Middle Band (Basis) Signals:
Trend Direction Confirmation:
Price Above Basis: Bullish trend bias, middle band acts as dynamic support in uptrends, consider long positions or holding existing longs
Price Below Basis: Bearish trend bias, middle band acts as dynamic resistance in downtrends, consider short positions or avoiding longs
Price Crossing Above Basis: Potential trend change from bearish to bullish, early signal to establish long positions
Price Crossing Below Basis: Potential trend change from bullish to bearish, early signal to establish short positions or exit longs
Pullback Trading Strategy:
Uptrend Pullback: Price pulls back from upper channel to middle band, finds support, and resumes upward, ideal long entry point
Downtrend Bounce: Price bounces from lower channel to middle band, meets resistance, and resumes downward, ideal short entry point
Basis Test: Strong trends often show price respecting the middle band as support/resistance on pullbacks
Failed Test: Price breaking through middle band against trend direction signals potential reversal
Volatility-Based Signals:
Narrow Channels (Low Volatility):
Consolidation Phase: Channels contract during periods of reduced volatility and directionless price action
Breakout Preparation: Narrow channels often precede significant directional moves as volatility cycles
Trading Approach: Reduce position sizes, wait for breakout confirmation, avoid range-bound strategies within channels
Breakout Direction: Monitor for price breaking decisively outside channel range with expanding width
Wide Channels (High Volatility):
Trending Phase: Channels expand during strong directional moves and increased volatility
Momentum Confirmation: Wide channels confirm genuine trend with substantial volatility backing
Trading Approach: Trend-following strategies excel, wider stops necessary, mean-reversion strategies risky
Exhaustion Signs: Extreme channel width (historical highs) may signal approaching consolidation or reversal
Advanced Pattern Recognition:
Channel Walking Pattern:
Upper Channel Walk: Price consistently touches or exceeds upper channel while staying above basis, very strong uptrend signal, hold longs aggressively
Lower Channel Walk: Price consistently touches or exceeds lower channel while staying below basis, very strong downtrend signal, hold shorts aggressively
Basis Support/Resistance: During channel walks, price typically uses middle band as support/resistance on minor pullbacks
Pattern Break: Price crossing basis during channel walk signals potential trend exhaustion
Squeeze and Release Pattern:
Squeeze Phase: Channels narrow significantly, price consolidates near middle band, volatility contracts
Direction Clues: Watch for price positioning relative to basis during squeeze (above = bullish bias, below = bearish bias)
Release Trigger: Price breaking outside narrow channel range with expanding width confirms breakout
Follow-Through: Measure squeeze height and project from breakout point for initial profit targets
Channel Expansion Pattern:
Breakout Confirmation: Rapid channel widening confirms volatility increase and genuine trend establishment
Entry Timing: Enter positions early in expansion phase before trend becomes overextended
Risk Management: Use channel width to size stops appropriately, wider channels require wider stops
Basis Bounce Pattern:
Clean Bounce: Price touches middle band and immediately reverses, confirms trend strength and entry opportunity
Multiple Bounces: Repeated basis bounces indicate strong, sustainable trend
Bounce Failure: Price penetrating basis signals weakening trend and potential reversal
Divergence Analysis:
Price/Channel Divergence: Price makes new high/low while staying within channel (not reaching outer band), suggests momentum weakening
Width/Price Divergence: Price breaks to new extremes but channel width contracts, suggests move lacks conviction
Reversal Signal: Divergences often precede trend reversals or significant consolidation periods
Multi-Timeframe Analysis:
Keltner Channels work particularly well in multi-timeframe trend-following approaches:
Three-Timeframe Alignment:
Higher Timeframe (Weekly/Daily): Identify major trend direction, note price position relative to basis and channels
Intermediate Timeframe (Daily/4H): Identify pullback opportunities within higher timeframe trend
Lower Timeframe (4H/1H): Time precise entries when price touches middle band or lower channel (in uptrends) with rejection
Optimal Entry Conditions:
Best Long Entries: Higher timeframe in uptrend (price above basis), intermediate timeframe pulls back to basis, lower timeframe shows rejection at middle band or lower channel
Best Short Entries: Higher timeframe in downtrend (price below basis), intermediate timeframe bounces to basis, lower timeframe shows rejection at middle band or upper channel
Risk Management: Use higher timeframe channel width to set position sizing, stops below/above higher timeframe channels
🎯 STRATEGIC APPLICATIONS
Keltner Channel Enhanced excels in trend-following and breakout strategies across different market conditions.
Trend Following Strategy:
Setup Requirements:
Identify established trend with price consistently on one side of basis line
Wait for pullback to middle band (basis) or brief penetration through it
Confirm trend resumption with price rejection at basis and move back toward outer channel
Enter in trend direction with stop beyond basis line
Entry Rules:
Uptrend Entry:
Price pulls back from upper channel to middle band, shows support at basis (bullish candlestick, momentum divergence)
Enter long on rejection/bounce from basis with stop 1-2 ATR below basis
Aggressive: Enter on first touch; Conservative: Wait for confirmation candle
Downtrend Entry:
Price bounces from lower channel to middle band, shows resistance at basis (bearish candlestick, momentum divergence)
Enter short on rejection/reversal from basis with stop 1-2 ATR above basis
Aggressive: Enter on first touch; Conservative: Wait for confirmation candle
Trend Management:
Trailing Stop: Use basis line as dynamic trailing stop, exit if price closes beyond basis against position
Profit Taking: Take partial profits at opposite channel, move stops to basis
Position Additions: Add to winners on subsequent basis bounces if trend intact
Breakout Strategy:
Setup Requirements:
Identify consolidation period with contracting channel width
Monitor price action near middle band with reduced volatility
Wait for decisive breakout beyond channel range with expanding width
Enter in breakout direction after confirmation
Breakout Confirmation:
Price breaks clearly outside channel (upper for longs, lower for shorts), channel width begins expanding from contracted state
Volume increases significantly on breakout (if using volume analysis)
Price sustains outside channel for multiple bars without immediate reversal
Entry Approaches:
Aggressive: Enter on initial break with stop at opposite channel or basis, use smaller position size
Conservative: Wait for pullback to broken channel level, enter on rejection and resumption, tighter stop
Volatility-Based Position Sizing:
Adjust position sizing based on channel width (ATR-based volatility):
Wide Channels (High ATR): Reduce position size as stops must be wider; calculate size with an ATR-based risk formula: Risk Amount / (Stop Distance in ATR multiples × ATR Value)
Narrow Channels (Low ATR): Increase position size as stops can be tighter, but be cautious of impending volatility expansion
ATR-Based Risk Management: For example, risking 1% of capital with a 2 ATR stop gives position size = (0.01 × Capital) / (2 × ATR); use multiples of ATR (1-2 ATR) for adaptive stops (see the worked example below)
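As a worked example of the sizing rule above (a generic sketch of fixed-fractional, ATR-based sizing, not code from the indicator):

```python
def atr_position_size(capital, atr, risk_fraction=0.01, stop_atr_multiple=2.0):
    # Risk a fixed fraction of capital with a stop a given number of ATRs away.
    risk_amount = risk_fraction * capital      # e.g. 1% of $50,000 = $500
    stop_distance = stop_atr_multiple * atr    # e.g. 2 x ATR = 2 x 2.5 = 5.0
    return risk_amount / stop_distance         # position size in shares/contracts

print(atr_position_size(50_000, 2.5))  # -> 100.0 shares
```

Wider channels (larger ATR) automatically shrink the position, which is the behavior the guidelines above call for.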
Algorithm Selection Guidelines:
Different market conditions benefit from different algorithm combinations:
Strong Trending Markets: EMA or HMA for the middle band with RMA for ATR captures trends quickly while maintaining stable channel width
Choppy/Ranging Markets: SMA or WMA for both the middle band and ATR avoids false trend signals while identifying genuine reversals
Volatile Markets: KAMA or FRAMA for both components self-adjusts to changing market conditions and reduces manual optimization
Breakout Trading: SMA for the middle band with EMA or SMA for ATR keeps the trend stable while dynamic channels highlight volatility expansion early
Scalping/Day Trading: HMA or T3 for the middle band with EMA or TEMA for ATR makes both components respond quickly
Position Trading: EMA/TEMA/T3 for the middle band with RMA or TEMA for ATR filters out noise for long-term trend-following
📋 DETAILED PARAMETER CONFIGURATION
Understanding and optimizing parameters is essential for adapting Keltner Channel Enhanced to specific trading approaches.
Source Parameter:
Close (Most Common): Uses closing price, reflects daily settlement, best for end-of-day analysis and position trading, standard choice
HL2 (Median Price): Smooths out closing bias, better represents full daily range in volatile markets, good for swing trading
HLC3 (Typical Price): Gives more weight to close while including full range, popular for intraday applications, slightly more responsive than HL2
OHLC4 (Average Price): Most comprehensive price representation, smoothest option, good for gap-prone markets or highly volatile instruments
Length Parameter:
Controls the lookback period for middle band (basis) calculation:
Short Periods (10-15): Very responsive to price changes, suitable for day trading and scalping, higher false signal rate
Standard Period (20 - Default): Represents approximately one month of trading, good balance between responsiveness and stability, suitable for swing and position trading
Medium Periods (30-50): Smoother trend identification, fewer false signals, better for position trading and longer holding periods
Long Periods (50+): Very smooth, identifies major trends only, minimal false signals but significant lag, suitable for long-term investment
Optimization by Timeframe: 1-15 minute charts use 10-20 period, 30-60 minute charts use 20-30 period, 4-hour to daily charts use 20-40 period, weekly charts use 20-30 weeks.
ATR Length Parameter:
Controls the lookback period for Average True Range calculation, affecting channel width:
Short ATR Periods (5-10): Very responsive to recent volatility changes, may be too reactive in whipsaw conditions
Standard ATR Period (10 - Default): The classic setting (the 10-day window traces back to Keltner's original bands, though he averaged the high-low range rather than ATR), good balance between responsiveness and stability, most widely used
Medium ATR Periods (14-20): Smoother channel width, ATR 14 aligns with Wilder's original ATR specification, good for position trading
Long ATR Periods (20+): Very smooth channel width, suitable for long-term trend-following
Length vs. ATR Length Relationship: Equal values (20/20) provide balanced responsiveness, longer ATR (20/14) gives more stable channel width, shorter ATR (20/10) is standard configuration, much shorter ATR (20/5) creates very dynamic channels.
Multiplier Parameter:
Controls channel width by setting ATR multiples:
Lower Values (1.0-1.5): Tighter channels with frequent price touches, more trading signals, higher false signal rate, better for range-bound and mean-reversion strategies
Standard Value (2.0 - Default): The standard setting of the modern ATR-based Keltner Channel (generally credited to Linda Bradford Raschke's revision rather than Keltner's original bands), good balance between signal frequency and reliability, suitable for both trending and ranging strategies
Higher Values (2.5-3.0): Wider channels with less frequent touches, fewer but potentially higher-quality signals, better for strong trending markets
Market-Specific Optimization: High volatility markets (crypto, small-caps) use 2.5-3.0 multiplier, medium volatility markets (major forex, large-caps) use 2.0 multiplier, low volatility markets (bonds, utilities) use 1.5-2.0 multiplier.
MA Type Parameter (Middle Band):
Critical selection that determines trend identification characteristics:
EMA (Exponential Moving Average - Default): The standard modern Keltner Channel choice (Keltner's 1960 original used a simple average of typical price; the EMA variant was popularized later), emphasizes recent prices, faster response to trend changes, suitable for all timeframes
SMA (Simple Moving Average): Equal weighting of all data points, no directional bias, slower than EMA, better for ranging markets and mean-reversion
HMA (Hull Moving Average): Minimal lag with smooth output, excellent for fast trend identification, best for day trading and scalping
TEMA (Triple Exponential Moving Average): Advanced smoothing with reduced lag, responsive to trends while filtering noise, suitable for volatile markets
T3 (Tillson T3): Very smooth with minimal lag, excellent for established trend identification, suitable for position trading
KAMA (Kaufman Adaptive Moving Average): Automatically adjusts speed based on market efficiency, slow in ranging markets, fast in trends, suitable for markets with varying conditions
ATR MA Type Parameter:
Determines how Average True Range is smoothed, affecting channel width stability:
RMA (Wilder's Smoothing - Default): J. Welles Wilder's original ATR smoothing method, very smooth, slow to adapt to volatility changes, provides stable channel width
SMA (Simple Moving Average): Equal weighting, moderate smoothness, faster response to volatility changes than RMA, more dynamic channel width
EMA (Exponential Moving Average): Emphasizes recent volatility, quick adaptation to new volatility regimes, very responsive channel width changes
TEMA (Triple Exponential Moving Average): Smooth yet responsive, good balance for varying volatility, suitable for most trading styles
Parameter Combination Strategies:
Conservative Trend-Following: Length 30/ATR Length 20/Multiplier 2.5, MA Type EMA or TEMA/ATR MA Type RMA, smooth trend with stable wide channels, suitable for position trading
Standard Balanced Approach: Length 20/ATR Length 10/Multiplier 2.0, MA Type EMA/ATR MA Type RMA, classic Keltner Channel configuration, suitable for general purpose swing trading
Aggressive Day Trading: Length 10-15/ATR Length 5-7/Multiplier 1.5-2.0, MA Type HMA or EMA/ATR MA Type EMA or SMA, fast trend with dynamic channels, suitable for scalping and day trading
Breakout Specialist: Length 20-30/ATR Length 5-10/Multiplier 2.0, MA Type SMA or WMA/ATR MA Type EMA or SMA, stable trend with responsive channel width
Adaptive All-Conditions: Length 20/ATR Length 10/Multiplier 2.0, MA Type KAMA or FRAMA/ATR MA Type KAMA or TEMA, self-adjusting to market conditions
Offset Parameter:
Controls horizontal positioning of channels on chart. Positive values shift channels to the right (future) for visual projection, negative values shift left (past) for historical analysis, zero (default) aligns with current price bars for real-time signal analysis. Offset affects only visual display, not alert conditions or actual calculations.
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Keltner Channel Enhanced provides improvements over standard implementations while maintaining proven effectiveness.
Response Characteristics:
Standard EMA/RMA Configuration: Moderate trend lag (approximately 0.4 × length periods), smooth and stable channel width from RMA smoothing, good balance for most market conditions
Fast HMA/EMA Configuration: Approximately 60% reduction in trend lag compared to EMA, responsive channel width from EMA ATR smoothing, suitable for quick trend changes and breakouts
Adaptive KAMA/KAMA Configuration: Variable lag based on market efficiency, automatic adjustment to trending vs. ranging conditions, self-optimizing behavior reduces manual intervention
Comparison with Traditional Keltner Channels:
Enhanced Version Advantages:
Dual Algorithm Flexibility: Independent MA selection for trend and volatility vs. fixed EMA/RMA, separate tuning of trend responsiveness and channel stability
Market Adaptation: Choose configurations optimized for specific instruments and conditions, customize for scalping, swing, or position trading preferences
Comprehensive Alerts: Enhanced alert system including channel expansion detection
Traditional Version Advantages:
Simplicity: Fewer parameters, easier to understand and implement
Standardization: Fixed EMA/RMA combination ensures consistency across users
Research Base: Decades of backtesting and research on standard configuration
When to Use Enhanced Version: Trading multiple instruments with different characteristics, switching between trending and ranging markets, employing different strategies, algorithm-based trading systems requiring customization, seeking optimization for specific trading style and timeframe.
When to Use Standard Version: Beginning traders learning Keltner Channel concepts, following published research or trading systems, preferring simplicity and standardization, wanting to avoid optimization and curve-fitting risks.
Performance Across Market Conditions:
Strong Trending Markets: EMA or HMA basis with RMA or TEMA ATR smoothing provides quicker trend identification, pullbacks to basis offer excellent entry opportunities
Choppy/Ranging Markets: SMA or WMA basis with RMA ATR smoothing and lower multipliers, channel bounce strategies work well, avoid false breakouts
Volatile Markets: KAMA or FRAMA with EMA or TEMA, adaptive algorithms excel by automatic adjustment, wider multipliers (2.5-3.0) accommodate large price swings
Low Volatility/Consolidation: Channels narrow significantly indicating consolidation, algorithm choice less impactful, focus on detecting channel width contraction for breakout preparation
Keltner Channel vs. Bollinger Bands - Usage Comparison:
Favor Keltner Channels When: Trend-following is primary strategy, trading volatile instruments with gaps, want ATR-based volatility measurement, prefer fewer higher-quality channel touches, seeking stable channel width during trends.
Favor Bollinger Bands When: Mean-reversion is primary strategy, trading instruments with limited gaps, want statistical framework based on standard deviation, need squeeze patterns for breakout identification, prefer more frequent trading opportunities.
Use Both Together: The Bollinger Band squeeze plus Keltner Channel breakout is a powerful combination; price outside the Bollinger Bands but inside the Keltner Channels indicates a moderate signal, price outside both indicates a very strong signal; use Bollinger Bands for entries and Keltner Channels for trend confirmation.
Limitations and Considerations:
General Limitations:
Lagging Indicator: All moving averages lag price, even with reduced-lag algorithms
Trend-Dependent: Works best in trending markets, less effective in choppy conditions
No Direction Prediction: Indicates volatility and deviation, not future direction, requires confirmation
Enhanced Version Specific Considerations:
Optimization Risk: More parameters increase risk of curve-fitting historical data
Complexity: Additional choices may overwhelm beginning traders
Backtesting Challenges: Different algorithms produce different historical results
Mitigation Strategies:
Use Confirmation: Combine with momentum indicators (RSI, MACD), volume, or price action
Test Parameter Robustness: Ensure parameters work across range of values, not just optimized ones
Multi-Timeframe Analysis: Confirm signals across different timeframes
Proper Risk Management: Use appropriate position sizing and stops
Start Simple: Begin with standard EMA/RMA before exploring alternatives
Optimal Usage Recommendations:
For Maximum Effectiveness:
Start with standard EMA/RMA configuration to understand classic behavior
Experiment with alternatives on demo account or paper trading
Match algorithm combination to market condition and trading style
Use channel width analysis to identify market phases
Combine with complementary indicators for confirmation
Implement strict risk management using ATR-based position sizing
Focus on high-quality setups rather than trading every signal
Respect the trend: trade with basis direction for higher probability
Complementary Indicators:
RSI or Stochastic: Confirm momentum at channel extremes
MACD: Confirm trend direction and momentum shifts
Volume: Validate breakouts and trend strength
ADX: Measure trend strength, avoid Keltner signals in weak trends
Support/Resistance: Combine with traditional levels for high-probability setups
Bollinger Bands: Use together for enhanced breakout and volatility analysis
USAGE NOTES
This indicator is designed for technical analysis and educational purposes. Keltner Channel Enhanced has limitations and should not be used as the sole basis for trading decisions. While the flexible moving average selection for both trend and volatility components provides valuable adaptability across different market conditions, algorithm performance varies with market conditions, and past characteristics do not guarantee future results.
Key considerations:
Always use multiple forms of analysis and confirmation before entering trades
Backtest any parameter combination thoroughly before live trading
Be aware that optimization can lead to curve-fitting if not done carefully
Start with standard EMA/RMA settings and adjust only when specific conditions warrant
Understand that no moving average algorithm can eliminate lag entirely
Consider market regime (trending, ranging, volatile) when selecting parameters
Use ATR-based position sizing and risk management on every trade
Keltner Channels work best in trending markets, less effective in choppy conditions
Respect the trend direction indicated by price position relative to basis line
The enhanced flexibility of dual algorithm selection provides powerful tools for adaptation but requires responsible use, thorough understanding of how different algorithms behave under various market conditions, and disciplined risk management.
[blackcat] L1 Net Volume Difference
OVERVIEW
The L1 Net Volume Difference indicator is an analytical tool that examines the differential between buying and selling volumes over precise timeframes to reveal market sentiment. By tracking these volume dynamics, it helps identify trends and potential reversal points more accurately, supporting well-informed decisions. Its key focus is dissecting the intraday changes that reflect short-term market behavior, offering critical input for swing and day traders alike. 📊
Key benefits encompass:
• Precise calculation of net volume differences grounded in real-time data.
• Interactive visualization elements enhancing interpretability effortlessly.
• Real-time generation of buy/sell signals driven by dynamic volume shifts.
TECHNICAL ANALYSIS COMPONENTS
📉 Volume Accumulation Mechanisms:
Monitors cumulative buy/sell volumes by comparing consecutive closing prices.
Periodically resets the accumulation counters at predefined intervals (e.g., 5-minute bars).
Helps identify the directional bias of underlying market forces (see the sketch below).
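The accumulation step can be sketched in a few lines of Python. This is a hedged reconstruction of the idea described above, not the published script's actual logic: each bar's volume is signed by the direction of the close-to-close change, cumulated, and reset on a fixed interval.

```python
import pandas as pd

def net_volume_difference(df, reset_every=5):
    # Sign each bar's volume: +1 if close rose versus the prior close,
    # -1 otherwise (flat/first bars count as selling here for simplicity).
    direction = (df["close"] > df["close"].shift(1)).astype(int) * 2 - 1
    signed_volume = direction * df["volume"]
    # Group bars into fixed-size windows and cumulate within each window,
    # which mimics the periodic counter reset described above.
    window_id = pd.Series(range(len(df)), index=df.index) // reset_every
    return signed_volume.groupby(window_id).cumsum()
```

A positive reading then means buying volume has dominated since the last reset; a negative reading means selling volume has.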
🕵️♂️ Sentiment Detection Algorithms:
Dynamically distinguishes between bullish and bearish sentiment.
Applies consistent, predefined statistical rules to maintain accuracy.
Supports adaptive thresholds that adjust sensitivity to changing market conditions.
🎯 Dynamic Signal Generation:
Detects transitions where dominance shifts between buyers and sellers.
Triggers timely alerts, enabling swift reactions to evolving market dynamics.
Uses conditional logic to reinforce signal validity and minimize false activations.
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Uses moving averages and standard deviation formulas to generate precise net volume measurements.
Applies Arithmetic Mean Line Algorithm (AMLA) smoothing to improve interpretability.
Stays aligned with established statistical principles.
🖱️ User Interface Elements:
Dedicated plots display real-time net volume markers for swift decision-making.
Context-sensitive color coding distinguishes positive and negative deviations intuitively.
Background shading highlights proximity to key thresholds, enhancing visibility.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups through multiple confirmatory signals.
Validate entry decisions against concurrent market sentiment.
Check that net volume readings align with the broader trend direction.
🚫 Exit Mechanisms:
Trigger exits upon hitting predetermined thresholds derived from historical analysis.
Monitor continued breaches that signal potential trend reversals, and close promptly.
Execute partial or total closes when cumulative loss limits are hit, preserving capital.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Reset Interval: Governs the trade-off between responsiveness and stability.
Price Source: Selects the data series that drives the volume calculations.
💬 Customization Recommendations:
Start with the baseline defaults, then refine parameters iteratively to isolate each one's impact.
Evaluate adjustments independently before combining modifications.
Prioritize reducing false triggers first to optimize signal fidelity.
Maintain a balanced risk-reward profile regardless of the chosen settings.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with predefined maximum leverage constraints.
Apply trailing stop-loss orders that conform to the script's outputs.
Allocate positions proportionately to available capital reserves.
Review strategy effectiveness periodically and rigorously to identify areas needing refinement.
⚠️ Potential Pitfalls & Solutions:
Expect more frequent rule violations during heightened volatility; intervene manually and judiciously.
Handle false alerts immediately to avoid adverse consequences.
Prepare contingency plans to mitigate the possibility of margin calls.
Continuously assess the reliability of any automation amid fluctuating conditions.
PERFORMANCE AUDITS & REFINEMENTS
🔍 Critical Evaluation Metrics:
Track win percentages across diverse trading instruments to gauge reliability.
Calculate the average profit ratio per successful execution to measure profitability.
Measure peak drawdown durations and magnitudes to evaluate downside risk.
Analyze signal generation frequency to reveal hidden patterns and uncover systematic biases.
📈 Historical Data Analysis Tools:
Maintain comprehensive records that document every triggered event.
Compare realized profits/losses against backtested simulations to benchmark actual versus expected performance.
Identify recurrent systematic errors and correct them through steady, iterative refinement.
Document evolving performance metrics to track progress and address identified shortcomings.
PROBLEM SOLVING ADVICE
🔧 Frequently Encountered Challenges:
Unpredictable behavior in thinly traded markets, which requires filtering.
Latency during abrupt price fluctuations, which causes missed opportunities.
Overfitted models that yield suboptimal results after extensive tuning and demand recalibration.
Inaccuracies stemming from incomplete or faulty data feeds, which necessitate verification.
💡 Effective Resolution Pathways:
Exclude low-liquidity assets prone to erratic movements to protect signal integrity.
Introduce buffer intervals around major news and events to mitigate distortions.
Limit ongoing optimization attempts to prevent model degradation.
Verify connections to ensure uninterrupted data feeds and accurate interpretation.
USER ENGAGEMENT SEGMENT
🤝 Community Contributions Welcome
We highly encourage active participation; please share your experiences and recommendations!
THANKS
Heartfelt acknowledgment extends to all developers contributing invaluable insights about volume-based trading methodologies! ✨
Forex Pips Tracker Pinescriptlabs
This algorithm is exclusively designed for the Forex market 🌐 and serves as a tool to measure volatility, helping to determine on average how many pips positions move per hour. With this information, a trader can place take-profit and stop-loss orders with greater certainty, since they know the average pip movement range during each hour of the day.
What does it do and how does it work?
• Volatility measurement in pips 📊:
The algorithm calculates the size of the movement (or range) of each candle expressed in pips. To do this, it takes the difference between the highest and lowest price of each candle and converts it into pips.
• Time zone adjustment ⏰:
It allows you to configure the time zone so that the data aligns with your desired schedule. This is especially useful for comparing movements at different times based on the trader's location.
• Analysis by time intervals 🕒:
The algorithm’s logic organizes the information for each hour of the day. It stores data for the current day, the previous day, weekly, and historically (200 candles). This allows you to see how volatility varies across different periods, providing a dynamic view of market behavior.
• Directionality of movement 🔄:
In addition to averaging the pip range, the algorithm determines the predominant direction of each candle (bullish or bearish). This translates into visual indicators (like arrows) that help identify whether, on average, the movement during that hour tends to go up or down.
• Table visualization 📈:
Finally, the information is presented in an integrated table on the chart. Each row corresponds to an hour of the day and shows the average number of pips and the direction (bullish, bearish, or neutral) for each analyzed period. This table makes it easy to quickly and practically interpret the volatility data.
By combining these features, the algorithm becomes an essential tool for traders looking to better understand market dynamics and optimize their trading strategies! 💼✨
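For readers who want the mechanics, here is a rough Python sketch of the hourly averaging described above. It assumes hourly OHLC data with a DatetimeIndex and a 4-decimal pip size (use 0.01 for JPY pairs); it approximates the idea rather than reproducing the script's exact logic:

```python
import pandas as pd

def hourly_pip_profile(df, pip_size=0.0001):
    # Candle range expressed in pips
    pips = (df["high"] - df["low"]) / pip_size
    # Candle direction: bullish if it closed at or above its open
    direction = (df["close"] >= df["open"]).map({True: "bullish", False: "bearish"})
    # Average range and most frequent direction for each hour of the day
    avg_pips = pips.groupby(df.index.hour).mean().round(1)
    bias = direction.groupby(df.index.hour).agg(lambda s: s.mode().iat[0])
    return pd.DataFrame({"avg_pips": avg_pips, "bias": bias})
```

The resulting table mirrors the indicator's on-chart output: one row per hour, with the average pip range and the prevailing direction.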
Dual Zigzag [Trendoscope®]
🎲 The Dual Zigzag indicator is built on the recursive zigzag algorithm. It is very similar to other zigzag indicators published by us and other authors. The key point, however, is that this indicator draws zigzags on both price and any other plot-based indicator, on separate layouts.
Before we get into the indicator, here are some brief descriptions of the underlying concepts and key terminologies
🎯 Zigzag
The Zigzag indicator breaks down price or any input series into a series of Pivot Highs and Pivot Lows alternating between each other. Though zigzags show pivot highs and lows, they should not be used for buying at lows and selling at highs. The main application of the zigzag indicator is the visualisation of market structure, which can serve as a basic building block for pattern recognition algorithms.
🎯 Recursive Zigzag Algorithm
The recursive zigzag algorithm builds zigzags on multiple levels, and each level is based on the previous level's pivots. The level 0 zigzag is built on price; for level 1, the level 0 zigzag pivots are used instead of price, and similarly for level 2, the level 1 pivots are used as the base. A conceptual sketch of a single zigzag level follows.
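To make this concrete, here is a minimal Python sketch of a single zigzag level using a simple percentage-reversal rule; the actual Trendoscope implementation confirms pivots differently, so treat this purely as a conceptual model:

```python
def zigzag_pivots(values, threshold=0.05):
    # Confirm an alternating pivot whenever price reverses by more than
    # `threshold` (a fractional move) from the current provisional extreme.
    pivots = []        # (index, value) of confirmed pivots
    direction = 0      # +1 on a rising leg, -1 on a falling leg, 0 at start
    extreme_i = 0      # index of the current provisional extreme
    for i in range(1, len(values)):
        v = values[i]
        if direction >= 0:
            if v > values[extreme_i]:
                extreme_i = i                                  # higher high: extend leg
            elif v < values[extreme_i] * (1 - threshold):
                pivots.append((extreme_i, values[extreme_i]))  # confirmed pivot high
                direction, extreme_i = -1, i
        else:
            if v < values[extreme_i]:
                extreme_i = i                                  # lower low: extend leg
            elif v > values[extreme_i] * (1 + threshold):
                pivots.append((extreme_i, values[extreme_i]))  # confirmed pivot low
                direction, extreme_i = 1, i
    return pivots

# Recursion: level 0 runs on price, level 1 re-runs the routine on the
# level 0 pivot values, and so on.
# level0 = zigzag_pivots(prices)
# level1 = zigzag_pivots([price for _, price in level0])
```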
🎲 Components of the Dual Zigzag Indicator
Here are the components of the Dual Zigzag indicator:
Built-in Oscillator - The indicator has built-in oscillator options for plotting RSI (Relative Strength Index), MFI (Money Flow Index), CCI (Commodity Channel Index), CMO (Chande Momentum Oscillator), COG (Center of Gravity), and ROC (Rate of Change). Apart from the given built-in oscillators, users can also use a custom external output as the base. The oscillators are not printed on the price pane, but on a separate indicator overlay.
Zigzag On Oscillator - Recursive zigzag is calculated and printed on the oscillator series. Each pivot high and pivot low also prints a label having the retracement ratios, and price levels at those points. Zigzag on the oscillator is also printed on the indicator overlay pane.
Zigzag on Price - Recursive zigzag calculated based on price and printed on the price pane. This is made possible by using force_overlay option present in the drawing objects. At each zigzag pivot levels, the label having price retracement ratios, and oscillator values are printed.
It is called Dual Zigzag because the indicator calculates the zigzag on both the price and oscillator series and prints them separately on different panes of the chart.
🎲 Indicator Settings
Settings include
Theme display settings to get the right colour combination to match the background.
Zigzag settings to be used for zigzag calculation and display
Oscillator settings to choose the oscillator to be used as the base for the second zigzag
🎲 Applications
Useful in spotting divergences with both indicator and price having their own zigzag to highlight pivots
Spotting patterns in indicators/oscillators and correlate them with the patterns on price
🎲 Using External Input
If users want to use an external indicator such as OBV instead of the built-in oscillators, they can do so by using the custom option.
Here is how this can be done.
Step 1. Add both Dual Zigzag and the intended indicator (in this case OBV) to the chart. Notice that OBV and Dual Zigzag appear on different panes.
Step 2. Edit the indicator settings of Dual Zigzag and set the custom indicator by selecting "custom" as the oscillator name and then setting the custom external indicator name and input.
Step 3. You will notice that the zigzag in the Dual Zigzag indicator pane already shows the zigzag pivots based on the OBV indicator, and the price pivots display OBV values at the pivot points. We can leave this as is.
Step 4. As an additional step, you can also merge the OBV pane and the Dual Zigzag indicator pane into one by going into the OBV settings and moving the indicator to the pane above. Merge the scales so that there are not two scales on the same pane and the entire scale appears on the right.
At the end, you should see two panes: one with price and the other with OBV, both having their zigzags plotted.
[Pandora] Vast Volatility Treasure Trove
INTRODUCTION:
Volatility enthusiasts, prepare for VICTORY on this day of July 4th, 2024! This is my "Vast Volatility Treasure Trove," intended mostly for educational purposes, yet these functions will also exhibit versatility when combined with other algorithms to garner statistical excellence. Once again, I am now ripping the lid off of Pandora's box... of volatility. Inside this script is a 'vast' collection of volatility estimators, reflecting the indicator's name. Whether you are a seasoned trader destined to navigate financial strife or an eagerly curious learner, this script offers a comprehensive toolkit for a broad spectrum of volatility analysis. Enjoy your journey through the realm of market volatility with this code!
WHAT IS MARKET VOLATILITY?:
Market volatility refers to fluctuations in the value of a financial market or asset over a period of time, often characterized by occasional rapid and significant deviations in price. During periods of greater market volatility, prices can move rapidly in either direction, creating uncertainty for investors, with results ranging from sharp declines to rapid gains. However, market volatility is a typical, expected aspect of financial markets that can also present opportunities for informed decision-making and potential benefits from the price flux.
SCRIPT INTENTION:
Volatility is assuredly omnipresent, waxing and waning in magnitude, and some readers have every intention of studying and/or measuring it. This script serves as an all-in-one armada of volatility estimators for TradingView members. I set out to provide a diverse set of tools to analyze and interpret market volatility, offering volatile insights and aiding the development of robust trading indicators and strategies.
In today's fast-paced financial markets, understanding and quantifying volatility is informative for both seasoned traders and novice investors. This script is designed to empower users by equipping them with a comprehensive suite of volatility estimators. Each function within this script has been meticulously crafted to address various aspects of volatility, from traditional methods like Garman-Klass and Parkinson to more advanced techniques like Yang-Zhang and my custom experimental algorithms.
Ultimately, this script is more than just a collection of functions. It is a gateway to a deeper understanding of market volatility and a valuable resource for anyone committed to mastering the complexities of financial markets.
SCRIPT CONTENTS:
This script includes a variety of functions designed to measure and analyze market volatility. Where applicable, an input checkbox option provides an unbiased/biased estimate. Below is a brief description of each function in the original order they appear as code upon first publish:
Parkinson Volatility - Estimates volatility emphasizing the high and low range movements.
Alternate Parkinson Volatility - Simpler version of the original Parkinson Volatility that I realized.
Garman-Klass Volatility - Estimates volatility based on high, low, open, and close prices using a formula that adjusts for biases in price dynamics.
Rogers-Satchell-Yoon Volatility #1 - Estimates volatility based on logarithmic differences between high, low, open, and close values.
Rogers-Satchell-Yoon Volatility #2 - Similar estimate to Rogers-Satchell with the same result via an alternate formulation of volatility.
Yang-Zhang Volatility - An advanced volatility estimate combining both strengths of the Garman-Klass and Rogers-Satchell estimators, with weights determined by an alpha parameter.
Yang-Zhang (Modified) Volatility - My experimental modification slightly different from the Yang-Zhang formula with improved computational efficiency.
Selectable Volatility - Basic customizable volatility calculation based on the logarithmic difference between selected numerator and denominator prices (e.g., open, high, low, close).
Close-to-Close Volatility - Estimates volatility using the logarithmic difference between consecutive closing prices. Specifically applicable to data sources without open, high, and low prices.
Open-to-Close Volatility - (Overnight Volatility): Estimates volatility based on the logarithmic difference between the opening price and the last closing price emphasizing overnight gaps.
Hilo Volatility - Estimates volatility using a method similar to Parkinson's method, which considers the logarithm of the high and low prices.
Vantage Volatility - My experimental custom 'vantage' method to estimate volatility similar to Yang-Zhang, which incorporates various factors (Alpha, Beta, Gamma) to generate a weighted logarithmic calculation. This may be a volatility advantage or disadvantage, hence its name.
Schwert Volatility - Estimates volatility based on arithmetic returns.
Historical Volatility - Estimates volatility considering logarithmic returns.
Annualized Historical Volatility - Estimates annualized volatility using logarithmic returns, adjusted for the number of trading days in a year.
If I omitted any other known varieties, detailed requests can be made below for their consideration and inclusion in future versions of this script...
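For readers who want to see the arithmetic, here is a minimal Python sketch of two of the range-based estimators listed above plus annualized close-to-close volatility. The formulas follow the standard published Parkinson and Garman-Klass definitions and are not necessarily identical to this script's Pine implementations:

```python
import numpy as np

def parkinson(high, low, window):
    # Parkinson (1980): sigma^2 = (1 / (4 ln 2)) * mean(ln(H/L)^2)
    hl = np.log(high / low) ** 2
    rolling_sum = np.convolve(hl, np.ones(window), "valid")
    return np.sqrt(rolling_sum / (4.0 * np.log(2.0) * window))

def garman_klass(open_, high, low, close, window):
    # Garman-Klass (1980): adds open/close information to the range term
    term = (0.5 * np.log(high / low) ** 2
            - (2.0 * np.log(2.0) - 1.0) * np.log(close / open_) ** 2)
    return np.sqrt(np.convolve(term, np.ones(window), "valid") / window)

def annualized_close_to_close(close, window, trading_days=252):
    # Rolling standard deviation of log returns, annualized by sqrt(252)
    r = np.diff(np.log(close))
    sd = np.array([r[i - window:i].std(ddof=1) for i in range(window, len(r) + 1)])
    return sd * np.sqrt(trading_days)
```

All three accept NumPy arrays of OHLC data and return one volatility reading per completed window.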
BONUS ALGORITHMS:
This script also includes several experimental and bonus functions that push the boundaries of volatility analysis as I understand it. These functions are designed to provide additional insights and also are my ideal notions for traders looking to explore other methods of volatility measurement.
VOLATILITY APPLICATIONS:
Volatility estimators serve a common role across various facets of trading and financial analysis, offering insights into market behavior. These tools are instrumental in enhancing risk management practices by providing a deeper understanding of market dynamics and the inherent uncertainty in asset prices. With volatility estimators, traders can effectively quantify market risk and adjust their strategies accordingly, optimizing portfolio performance and mitigating potential losses. Additionally, volatility estimates may serve as indications of overbought or oversold market conditions, offering probabilistic insights that could inform strategic decisions at turning points. This script distinctly offers a variety of volatility estimators for navigating intricate financial terrain with informed judgment and addressing the challenges of strategic planning.
CODE REUSE:
You don't have to ask for my permission to use/reuse these functions in your published scripts, simply because I have better things to do than answer requests for the reuse of these functions.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already.
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
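For intuition, here is a bare-bones Python sketch of one SSA decomposition-and-reconstruction pass. The endpointed variant effectively repeats this at each bar using only the data available up to that bar, so earlier outputs never change; this is a generic SSA illustration, not the indicator's optimized Pine Script code:

```python
import numpy as np

def ssa_smooth(series, lag=30, ncomp=2):
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - lag + 1
    # 1. Trajectory (Hankel) matrix: columns are lagged windows of the series
    X = np.column_stack([x[i:i + lag] for i in range(k)])
    # 2. SVD, keeping only the leading `ncomp` components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :ncomp] * s[:ncomp]) @ Vt[:ncomp]
    # 3. Diagonal averaging (Hankelization) back to a 1-D smoothed series
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(lag):
        for j in range(k):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts
```

The lag and ncomp arguments play the same roles as the indicator's Lag and Number of Computations inputs: a longer lag widens the embedding window, and fewer components yield a smoother reconstruction.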
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
Bogdan Ciocoiu - Litigator
Description
The Litigator is an indicator that encapsulates the value delivered by the Relative Strength Index, Ultimate Oscillator, Stochastic and Money Flow Index algorithms to produce signals enabling users to enter positions in ideal market conditions. The Litigator integrates the value delivered by the above four algorithms into one script.
This indicator is handy when trading continuation/reversal divergence strategies in conjunction with price action.
Uniqueness
The Litigator's uniqueness stems from integrating the above algorithms into the same visual area and leveraging preconfigured parameters suitable for short-term scalping (1-5 minutes).
In addition, the Litigator allows configuring the above four algorithms in such a way as to coordinate signals by colour-coding or shape thickness, helping the user identify any emerging patterns more quickly.
Furthermore, the Litigator's uniqueness is also reflected in the way it has standardised the outputs of each algorithm to look and feel the same and, in doing so, enables users to plug them in/out as needed. This also includes ensuring the ratios of the shapes are similar (applicable to the same scale).
Open-source
The indicator uses the following open-source scripts/algorithms:
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
Bogdan Ciocoiu - Moonshot
Description
Moonshot is an indicator that encapsulates the value delivered by the TSI, MACD, Awesome Oscillator and CCI algorithms to produce signals to enable users to enter positions in ideal market conditions. Moonshot integrates the value delivered by the above four algorithms into one script.
This indicator is particularly useful when trading continuation/reversal divergence strategies.
Uniqueness
Moonshot's uniqueness stems from integrating the above algorithms into the same visual area and leveraging preconfigured parameters suitable for 1-3 minute scalping techniques.
In addition, Moonshot allows swapping or further configuring the above four algorithms in such a way as to align signals by colour-coding or shape thickness, helping users identify any emerging patterns more quickly.
Furthermore, Moonshot's uniqueness is also reflected in the way it has standardised the outputs of each algorithm to look and feel the same (including the scale at which the shapes are shown) and, in doing so, enables users to plug them in/out as needed.
Open-source
The indicator leverages the following open-source scripts/algorithms:
www.tradingview.com
www.tradingview.com
www.tradingview.com
www.tradingview.com
LengthAdaptation
Collection of dynamic length adaptation algorithms, mostly taken from various Adaptive Moving Averages (which are usually just EMAs otherwise). Now you can combine Adaptations with any other Moving Averages or Oscillators (see my other libraries) to get something like a Deviation Scaled RSI or a Fractal Adaptive VWMA. This collection is not encyclopaedic. Suggestions are welcome.
chande(src, len, sdlen, smooth, power) Chande's Dynamic Length
Parameters:
src : Series to use
len : Reference lookback length
sdlen : Lookback length of Standard deviation
smooth : Smoothing length of Standard deviation
power : Exponent of the length adaptation (lower is smaller variation)
Returns: Calculated period
Taken from Chande's Dynamic Momentum Index (CDMI or DYMOI), which is dynamic RSI with this length
Original default power value is 1, but I use 0.5
A variant of this algorithm is also included, where volume is used instead of price
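As a rough sketch of the idea, the Python below mirrors the CDMI-style scaling: the current standard deviation is compared with its own smoothed average, and the reference length is divided by that ratio raised to `power`, so high relative volatility shortens the period. The library's exact normalization may differ; names and defaults here are illustrative.

```python
import numpy as np

def chande_dynamic_length(src: np.ndarray, length: int = 14, sd_len: int = 5,
                          smooth: int = 10, power: float = 0.5) -> float:
    """CDMI-style dynamic length: a volatility ratio rescales the reference length."""
    # Rolling standard deviation over sd_len bars.
    sd = np.array([np.std(src[i - sd_len + 1:i + 1])
                   for i in range(sd_len - 1, len(src))])
    ratio = sd[-1] / np.mean(sd[-smooth:])   # current stdev vs. its smoothed average
    return length / ratio ** power           # higher volatility -> shorter period

prices = np.cumsum(np.random.randn(200)) + 50
print(chande_dynamic_length(prices))
```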
vidya(src, len, dynLow) Variable Index Dynamic Average Indicator (VIDYA)
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
Returns: Calculated period
Standard VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
I took the adaptation part, as it is just an EMA otherwise
vidyaRS(src, len, dynHigh) Relative Strength Dynamic Length - VIDYA RS
Parameters:
src : Series to use
len : Reference lookback length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on Vitali Apirine's modification (Stocks and Commodities, January 2022) of VIDYA algorithm. The period oscillates from the Upper Bound down (fast)
I took the adaptation part, as it is just an EMA otherwise
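Both variants share one ingredient: the absolute Chande Momentum Oscillator, which measures how one-sided recent price changes are. The sketch below computes it and shows one plausible way to map it onto a period from either bound. The library's precise mappings are not reproduced here, so `vidya_length` and `vidya_rs_length` are hypothetical approximations of the described behavior (slow side rising from the lower bound, fast side descending from the upper bound).

```python
import numpy as np

def cmo_abs(src: np.ndarray, length: int) -> float:
    """Absolute Chande Momentum Oscillator, scaled to 0..1."""
    diff = np.diff(src[-length - 1:])
    up, dn = diff[diff > 0].sum(), -diff[diff < 0].sum()
    return abs(up - dn) / (up + dn) if up + dn else 0.0

def vidya_length(src: np.ndarray, length: int, dyn_low: int) -> float:
    # Illustrative mapping: weak momentum keeps the period near the
    # reference; strong momentum pulls it down toward the lower bound.
    k = cmo_abs(src, length)
    return dyn_low + (length - dyn_low) * (1 - k)

def vidya_rs_length(src: np.ndarray, length: int, dyn_high: int) -> float:
    # Illustrative mapping for Apirine's variant: the period starts at the
    # upper bound and descends as momentum strengthens.
    k = cmo_abs(src, length)
    return dyn_high - (dyn_high - length) * k

prices = np.cumsum(np.random.randn(120)) + 10
print(vidya_length(prices, 14, 5), vidya_rs_length(prices, 14, 30))
```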
kaufman(src, len, dynLow, dynHigh) Kaufman Efficiency Scaling
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on the Efficiency Ratio calculation originally used in the Kaufman Adaptive Moving Average developed by Perry J. Kaufman
I took the adaptation part, as it is just an EMA otherwise
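The Efficiency Ratio itself is standard: net price movement divided by the sum of absolute bar-to-bar movements over the lookback. Here is a sketch with an illustrative (not the library's) mapping from the ratio to a period between the two bounds.

```python
import numpy as np

def kaufman_length(src: np.ndarray, length: int,
                   dyn_low: int, dyn_high: int) -> float:
    """Kaufman Efficiency Scaling: trending markets get shorter periods."""
    window = src[-length - 1:]
    direction = abs(window[-1] - window[0])        # net move over the window
    volatility = np.abs(np.diff(window)).sum()     # total path travelled
    er = direction / volatility if volatility else 0.0
    # Illustrative mapping: efficient (trending) -> dyn_low, noisy -> dyn_high.
    return dyn_high - (dyn_high - dyn_low) * er

prices = np.cumsum(np.random.randn(120)) + 10
print(kaufman_length(prices, length=10, dyn_low=2, dyn_high=30))
```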
ds(src, len) Deviation Scaling
Parameters:
src : Series to use
len : Reference lookback length
Returns: Calculated period
Based on the Deviation Scaled Super Smoother (DSSS) by John F. Ehlers
Originally used with Super Smoother
The RMS originally has a 50-bar lookback; changed to 4x length for better flexibility. Could be wrong.
maa(src, len, threshold) Median Average Adaptation
Parameters:
src : Series to use
len : Reference lookback length
threshold : Adjustment threshold (lower is smaller length, default: 0.002, min: 0.0001)
Returns: Calculated period
Based on Median Average Adaptive Filter by John F. Ehlers
Discovered and implemented by @cheatcountry.
I took the adaptation part, as it is just an EMA otherwise
fra(len, fc, sc) Fractal Adaptation
Parameters:
len : Reference lookback length
fc : Fast constant (default: 1)
sc : Slow constant (default: 200)
Returns: Calculated period
Based on FRAMA by John F. Ehlers
Modified to allow lower and upper bounds by an unknown author
I took the adaptation part, as it is just an EMA otherwise
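Ehlers' FRAMA adaptation estimates a fractal dimension from two half-window ranges and the full-window range, then converts it into a smoothing constant. The sketch below follows his original mapping; the fc/sc bounding is shown as a simple clamp, which may differ from the unknown author's modification.

```python
import numpy as np

def fractal_length(high: np.ndarray, low: np.ndarray, length: int,
                   fc: int = 1, sc: int = 200) -> float:
    """FRAMA-style adaptation: fractal dimension mapped to an EMA period."""
    half = length // 2

    def box_range(h: np.ndarray, l: np.ndarray) -> float:
        return (h.max() - l.min()) / len(h)

    n1 = box_range(high[-length:-half], low[-length:-half])  # older half
    n2 = box_range(high[-half:], low[-half:])                # recent half
    n3 = box_range(high[-length:], low[-length:])            # full window
    d = (np.log(n1 + n2) - np.log(n3)) / np.log(2)           # dimension ~1..2
    alpha = np.exp(-4.6 * (d - 1))                           # Ehlers' mapping
    # Clamp alpha between the slow (sc) and fast (fc) equivalent EMA alphas.
    alpha = min(max(alpha, 2 / (sc + 1)), 2 / (fc + 1))
    return 2 / alpha - 1                                     # equivalent period

h = np.cumsum(np.random.randn(64)) + 101
l = h - np.random.rand(64)
print(fractal_length(h, l, length=16))
```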
mama(src, dynLow, dynHigh) MESA Adaptation - MAMA Alpha
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on MESA Adaptive Moving Average by John F. Ehlers
Introduced in the September 2001 issue of Stocks and Commodities
Inspired by the @everget implementation.
I took the adaptation part, as it is just an EMA otherwise
doAdapt(type, src, len, dynLow, dynHigh, chandeSDLen, chandeSmooth, chandePower) Execute a particular Length Adaptation from the list
Parameters:
type : Length Adaptation type to use
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
chandeSDLen : Lookback length of Standard deviation for Chande's Dynamic Length
chandeSmooth : Smoothing length of Standard deviation for Chande's Dynamic Length
chandePower : Exponent of the length adaptation for Chande's Dynamic Length (lower is smaller variation)
Returns: Calculated period (float, not limited)
doMA(type, src, len) MA wrapper on wrapper: if DSSS is selected, calculate it here
Parameters:
type : MA type to use
src : Series to use
len : Filtering length
Returns: Filtered series
Demonstration of a combined indicator: Deviation Scaled Super Smoother
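The plug-in pattern the library is built around (a Length Adaptation computes a period, then any moving average consumes it) can be sketched in Python as follows. The `er_length` toy adaptation and all defaults are illustrative, not the library's code.

```python
import numpy as np

def ema(src, length):
    """EMA with a possibly fractional period; returns the latest value."""
    alpha = 2 / (length + 1)
    out = src[0]
    for x in src[1:]:
        out = alpha * x + (1 - alpha) * out
    return out

def er_length(src, length=10, dyn_low=2, dyn_high=30):
    """Toy adaptation: Kaufman Efficiency Ratio mapped between two bounds."""
    w = src[-length - 1:]
    vol = np.abs(np.diff(w)).sum()
    er = abs(w[-1] - w[0]) / vol if vol else 0.0
    return dyn_high - (dyn_high - dyn_low) * er

def adaptive_ma(src, adapt, ma=ema, **kwargs):
    """Mirror of the doAdapt/doMA pattern: the adaptation supplies the
    period, the moving average filters with it."""
    return ma(src, max(1.0, adapt(src, **kwargs)))

prices = np.cumsum(np.random.randn(200)) + 50
print(adaptive_ma(prices, er_length))
```

Any adaptation from the list can be dropped in for `er_length`, and any filter with an `(src, length)` signature for `ema`, which is exactly the combinatorial freedom the library aims for.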
DMI + HMA - No Risk Management
DMI (Directional Movement Index) and HMA (Hull Moving Average)
The DMI and HMA make a great combination: the DMI gauges market direction, while the HMA adds confirmation of trend strength.
What is the DMI?
The DMI is an indicator developed by J. Welles Wilder in 1978. It was designed to identify the direction in which price is moving, which is done by comparing previous highs and lows and drawing two lines:
1. A Positive movement line
2. A Negative movement line
A third line can be added, known as the ADX line or Average Directional Index. It can be used to gauge the strength of the direction in which the market is moving.
When the Positive movement line (DI+) is above the Negative movement line (DI-), there is more upward pressure. Vice versa, when the DI- is above the DI+, that indicates more downward pressure.
Want to know more about HMA? Check out one of our other published scripts
What is this strategy doing?
We first wait for the DMI to cross in our favoured direction; after that, we wait for the HMA to signal the entry. Without both conditions being true, no trade will be made.
Long Entries
1. DI+ crosses above DI-
2. HMA line 1 is above HMA line 2
Short Entries
1. DI- Crosses above DI+
2. HMA line 1 is below HMA line 2
It's as simple as that.
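For readers who want the two conditions in code, here is a hedged Python sketch of the entry logic. Wilder's DMI is approximated with simple rolling means rather than his recursive smoothing, the two "HMA lines" are assumed to be a fast and a slow Hull MA, and every length is illustrative rather than the strategy's optimised setting.

```python
import numpy as np

def wma(x: np.ndarray, n: int) -> np.ndarray:
    """Linearly weighted MA (NaN until enough bars accumulate)."""
    w = np.arange(1, n + 1, dtype=float)
    out = np.full(len(x), np.nan)
    for i in range(n - 1, len(x)):
        out[i] = (x[i - n + 1:i + 1] * w).sum() / w.sum()
    return out

def hma(x: np.ndarray, n: int) -> np.ndarray:
    """Hull MA: WMA(2*WMA(n/2) - WMA(n), sqrt(n))."""
    return wma(2 * wma(x, n // 2) - wma(x, n), int(np.sqrt(n)))

def dmi(high, low, close, n=14):
    """+DI / -DI, smoothed with simple rolling means for brevity."""
    up, dn = np.diff(high), -np.diff(low)
    plus_dm = np.where((up > dn) & (up > 0), up, 0.0)
    minus_dm = np.where((dn > up) & (dn > 0), dn, 0.0)
    tr = np.maximum(high[1:] - low[1:],
                    np.maximum(np.abs(high[1:] - close[:-1]),
                               np.abs(low[1:] - close[:-1])))
    kernel = np.ones(n) / n
    atr = np.convolve(tr, kernel, mode="valid")
    di_plus = 100 * np.convolve(plus_dm, kernel, mode="valid") / atr
    di_minus = 100 * np.convolve(minus_dm, kernel, mode="valid") / atr
    return di_plus, di_minus

# Illustrative data and rule check on the latest bar.
h = np.cumsum(np.random.randn(200)) + 100
l = h - np.abs(np.random.randn(200))
c = (h + l) / 2
dip, dim = dmi(h, l, c)
fast, slow = hma(c, 21), hma(c, 50)
long_entry = dip[-1] > dim[-1] and dip[-2] <= dim[-2] and fast[-1] > slow[-1]
short_entry = dim[-1] > dip[-1] and dim[-2] <= dip[-2] and fast[-1] < slow[-1]
```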
Conclusion
While this strategy does have its downsides, those can be reduced by adding some risk management to the script. In general, trade profitability is above average, and the max drawdown is at a minimum.
The settings have been optimised to suit BTCUSDT PERP markets, though with small adjustments the strategy can be used on many assets!
SMU Stock Thermometer
This script shows various technical indicators in a stacked vertical candle called the Market Thermometer.
It helps you see the price action in one single vertical column where the actual price moves up or down, so you can see the price change based on your custom setting levels.
I've been studying the ALGO for over a year and have made many live experimental trades, long and short. So, I'm trying to find a way to see what the ALGO's next move is. If that sounds far-fetched, then you should see my other published scripts.
Here is an example of how the ALGO dances around old indicators, which is why I started creating a bunch of new indicators that the ALGO doesn't know.
Example:
Impact-driven algorithm = large volume masked as small volume to keep the price at a desired level, so your chart says overbought but the market doesn't drop for days.
Cost-driven algorithm = hedge funds buy every time at a lower price and prevent others from buying low before moving up fast. It is like a clock with millisecond timing, and the ALGO owners know when to buy low and when to sell high.
If you have a good idea, let me know so I can include it in future versions.
Enjoy and think outside the box; that is the only way to beat the ALGO.
Dresteghamat - Multi timeframe Regime & Exhaustion
This script is a custom decision-support dashboard that aggregates volatility, momentum, and structural data across multiple timeframes to filter market noise. It addresses the problem of "Analysis Paralysis" by automating the correlation between lower timeframe momentum and higher timeframe structure using a weighted scoring algorithm.
### 🔧 Methodology & Calculation Logic
The core engine does not simply overlay indicators; it normalizes their outputs into a unified score (-100 to +100). The logic is hidden (Protected) to preserve the proprietary weighting algorithm, but the underlying concepts are as follows:
**1. Adaptive Timeframe Selection (Context Engine)**
Instead of static monitoring, the script detects the user's current chart timeframe (`timeframe.multiplier`) and dynamically assigns two relevant Higher Timeframes (HTF) as anchors.
* *Logic:* If Current TF < 5min, the script analyzes 15m and 1H data. If Current TF < 1H, it shifts to 4H and Daily data. This ensures the analysis is contextually relevant.
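A minimal sketch of that timeframe mapping follows; the behaviour at or above 1H is not stated in the description, so the final fallback here is an assumption for illustration only.

```python
def anchor_timeframes(chart_tf_minutes: float) -> tuple[str, str]:
    """Pick two higher-timeframe anchors from the current chart timeframe,
    following the boundaries described above."""
    if chart_tf_minutes < 5:
        return "15m", "1H"
    if chart_tf_minutes < 60:
        return "4H", "D"
    # Not specified in the description: assumed daily/weekly fallback.
    return "D", "W"

print(anchor_timeframes(3))   # ('15m', '1H')
print(anchor_timeframes(15))  # ('4H', 'D')
```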
**2. Regime & Volatility Filter (ATR Based)**
We use the Average True Range (ATR) to determine the market regime (Trend vs. Range).
* **Calculation:** We compare the current Swing Range (High-Low lookback) against a smoothed ATR. A high Ratio (> 2.0) indicates a Trend Regime, activating Trend-Following logic. A low ratio dampens the signals.
**3. Directional Bias (Structure + Flow)**
Direction is not determined by a single crossover. It is a fusion of:
* **Swing Structure:** Using `ta.pivothigh/low` to identify Higher Highs/Lower Lows.
* **Volume Flow:** Calculating the cumulative delta of candle bodies over a lookback period.
* **Micro-Bias:** A short-term (default 5-bar) momentum filter to detect immediate order flow changes.
**4. Exhaustion Logic (Mean Reversion Warning)**
To prevent buying at tops, the script calculates an "Exhaustion Score" based on:
* **RSI Divergence:** Detecting discrepancies between price peaks and momentum.
* **Volatility Extension:** Identifying when price has deviated significantly from its volatility mean (VRSD logic).
* **Volume Anomalies:** Detecting low volume on new highs (Supply absorption).
### 📊 How to Read the Dashboard
The table displays the raw status of each timeframe. The **"MODE"** row is the output of the algorithmic decision tree:
* **BUY/SELL ONLY:** Generated when the Current TF momentum aligns with the dynamically selected HTF structure AND the Exhaustion Score is below the threshold (default 70).
* **PULLBACK:** Triggered when the HTF Structure is bullish, but Current Momentum is bearish (indicating a corrective phase).
* **HTF EXHAUST:** A safety warning triggered when the HTF Volatility or RSI metrics hit extreme levels, overriding any entry signals.
* **WAIT:** Default state when volatility is low (Range Regime) or signals conflict.
### ⚠️ Disclaimer
This tool provides algorithmic analysis based on historical price action and volatility metrics. It does not guarantee future results.
T3 ATR [DCAUT]
█ T3 ATR
📊 ORIGINALITY & INNOVATION
The T3 ATR indicator represents an important enhancement to the traditional Average True Range (ATR) indicator by incorporating the T3 (Tilson Triple Exponential Moving Average) smoothing algorithm. While standard ATR uses fixed RMA (Running Moving Average) smoothing, T3 ATR introduces a configurable volume factor parameter that allows traders to adjust the smoothing characteristics from highly responsive to heavily smoothed output.
This innovation addresses a fundamental limitation of traditional ATR: the inability to adapt smoothing behavior without changing the calculation period. With T3 ATR, traders can maintain a consistent ATR period while adjusting the responsiveness through the volume factor, making the indicator adaptable to different trading styles, market conditions, and timeframes through a single unified implementation.
The T3 algorithm's triple exponential smoothing with volume factor control provides improved signal quality by reducing noise while maintaining better responsiveness compared to traditional smoothing methods. This makes T3 ATR particularly valuable for traders who need to adapt their volatility measurement approach to varying market conditions without switching between multiple indicator configurations.
📐 MATHEMATICAL FOUNDATION
The T3 ATR calculation process involves two distinct stages:
Stage 1: True Range Calculation
The True Range (TR) is calculated using the standard formula:
TR = max(high - low, |high - close[1]|, |low - close[1]|)
This captures the greatest of the current bar's range, the gap from the previous close to the current high, or the gap from the previous close to the current low, providing a comprehensive measure of price movement that accounts for gaps and limit moves.
Stage 2: T3 Smoothing Application
The True Range values are then smoothed using the T3 algorithm, which applies six exponential moving averages in succession:
First Layer: e1 = EMA(TR, period), e2 = EMA(e1, period)
Second Layer: e3 = EMA(e2, period), e4 = EMA(e3, period)
Third Layer: e5 = EMA(e4, period), e6 = EMA(e5, period)
Final Calculation: T3 = c1×e6 + c2×e5 + c3×e4 + c4×e3
The coefficients (c1, c2, c3, c4) are derived from the volume factor (VF) parameter:
a = VF / 2
c1 = -a³
c2 = 3a² + 3a³
c3 = -6a² - 3a - 3a³
c4 = 1 + 3a + a³ + 3a²
The volume factor parameter (0.0 to 1.0) controls the weighting of these coefficients, directly affecting the balance between responsiveness and smoothness (see the sketch after this list):
Lower VF values (approaching 0.0): Coefficients favor recent data, resulting in faster response to volatility changes with minimal lag but potentially more noise
Higher VF values (approaching 1.0): Coefficients distribute weight more evenly across the smoothing layers, producing smoother output with reduced noise but slightly increased lag
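Putting the two stages together, here is a minimal Python sketch that follows the formulas above (six chained EMAs combined with the volume-factor coefficients, with a = VF / 2). It illustrates the stated math and is not the published Pine Script source.

```python
import numpy as np

def ema(x: np.ndarray, n: int) -> np.ndarray:
    alpha = 2 / (n + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def t3_atr(high, low, close, length=14, vf=0.7):
    """Stage 1: True Range; Stage 2: T3 smoothing of the TR series."""
    tr = np.maximum(high[1:] - low[1:],
                    np.maximum(np.abs(high[1:] - close[:-1]),
                               np.abs(low[1:] - close[:-1])))
    e = tr
    layers = []
    for _ in range(6):               # e1..e6: six chained EMAs
        e = ema(e, length)
        layers.append(e)
    e1, e2, e3, e4, e5, e6 = layers
    a = vf / 2                       # volume factor coefficient base, as above
    c1 = -a**3
    c2 = 3 * a**2 + 3 * a**3
    c3 = -6 * a**2 - 3 * a - 3 * a**3
    c4 = 1 + 3 * a + a**3 + 3 * a**2
    return c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3

h = np.cumsum(np.random.randn(300)) + 100
l = h - np.abs(np.random.randn(300)) - 0.1
c = (h + l) / 2
print(t3_atr(h, l, c)[-1])
```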
📊 COMPREHENSIVE SIGNAL ANALYSIS
Volatility Level Interpretation:
High Absolute Values: Indicate strong price movements and elevated market activity, suggesting larger position risks and wider stop-loss requirements, often associated with trending markets or significant news events
Low Absolute Values: Indicate subdued price movements and quiet market conditions, suggesting smaller position risks and tighter stop-loss opportunities, often associated with consolidation phases or low-volume periods
Rapid Increases: Sharp spikes in T3 ATR often signal the beginning of significant price moves or market regime changes, providing early warning of increased trading risk
Sustained High Levels: Extended periods of elevated T3 ATR indicate sustained trending conditions with persistent volatility, suitable for trend-following strategies
Sustained Low Levels: Extended periods of low T3 ATR indicate range-bound conditions with suppressed volatility, suitable for mean-reversion strategies
Volume Factor Impact on Signals:
Low VF Settings (0.0-0.3): Produce responsive signals that quickly capture volatility changes, suitable for short-term trading but may generate more frequent color changes during minor fluctuations
Medium VF Settings (0.4-0.7): Provide balanced signal quality with moderate responsiveness, filtering out minor noise while capturing significant volatility changes, suitable for swing trading
High VF Settings (0.8-1.0): Generate smooth, stable signals that filter out most noise and focus on major volatility trends, suitable for position trading and long-term analysis
🎯 STRATEGIC APPLICATIONS
Position Sizing Strategy:
Determine your risk per trade (e.g., 1% of account capital - adjust based on your risk tolerance and experience)
Decide your stop-loss distance multiplier (e.g., 2.0x T3 ATR - this varies by market and strategy, test different values)
Calculate stop-loss distance: Stop Distance = Multiplier × Current T3 ATR
Calculate position size: Position Size = (Account × Risk %) / Stop Distance
Example: $10,000 account, 1% risk, T3 ATR = 50 points, 2x multiplier → Position Size = ($10,000 × 0.01) / (2 × 50) = $100 / 100 points = 1 unit per point (reproduced in the sketch after this list)
Important: The ATR multiplier (1.5x - 3.0x) should be determined through backtesting for your specific instrument and strategy - using inappropriate multipliers may result in stops that are too tight (frequent stop-outs) or too wide (excessive losses)
Adjust the volume factor to match your trading style: lower VF for responsive stop distances in short-term trading, higher VF for stable stop distances in position trading
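In code, the position-sizing arithmetic above reduces to two lines; the worked example from the list is reproduced to confirm the result.

```python
def position_size(account: float, risk_pct: float,
                  t3_atr: float, atr_mult: float) -> float:
    """Units per point of price movement, per the formula above."""
    stop_distance = atr_mult * t3_atr             # stop distance in points
    return (account * risk_pct) / stop_distance   # dollars risked / points risked

# $10,000 account, 1% risk, T3 ATR = 50 points, 2x multiplier -> 1 unit per point.
print(position_size(10_000, 0.01, 50, 2.0))  # 1.0
```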
Dynamic Stop-Loss Placement:
Determine your risk tolerance multiplier (typically 1.5x to 3.0x T3 ATR)
For long positions: Set stop-loss at entry price minus (multiplier × current T3 ATR value)
For short positions: Set stop-loss at entry price plus (multiplier × current T3 ATR value)
Trail stop-losses by recalculating based on current T3 ATR as the trade progresses
Adjust the volume factor based on desired stop-loss stability: higher VF for less frequent adjustments, lower VF for more adaptive stops
Market Regime Identification:
Calculate a reference volatility level using a longer-period moving average of T3 ATR (e.g., 50-period SMA); see the sketch after this list
High Volatility Regime: Current T3 ATR significantly above reference (e.g., 120%+) - favor trend-following strategies, breakout trades, and wider targets
Normal Volatility Regime: Current T3 ATR near reference (e.g., 80-120%) - employ standard trading strategies appropriate for prevailing market structure
Low Volatility Regime: Current T3 ATR significantly below reference (e.g., <80%) - favor mean-reversion strategies, range trading, and prepare for potential volatility expansion
Monitor T3 ATR trend direction and compare current values to recent history to identify regime transitions early
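A sketch of that classification, using the example thresholds and the 50-period reference from this list (all values are the illustrative ones given above, not fixed rules):

```python
import numpy as np

def volatility_regime(t3_atr_series: np.ndarray, ref_len: int = 50) -> str:
    """Classify the regime from current T3 ATR vs. its longer-period mean."""
    reference = np.mean(t3_atr_series[-ref_len:])  # e.g. 50-period average
    ratio = t3_atr_series[-1] / reference
    if ratio >= 1.2:
        return "high"     # trend-following, breakouts, wider targets
    if ratio < 0.8:
        return "low"      # mean reversion; watch for expansion
    return "normal"       # standard strategies

print(volatility_regime(np.abs(np.random.randn(200)) + 1.0))
```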
Risk Management Implementation:
Establish your maximum portfolio heat (total risk across all positions, typically 2-6% of capital)
For each position: Calculate position size using the formula Position Size = (Account × Individual Risk %) / (ATR Multiplier × Current T3 ATR)
When T3 ATR increases: Position sizes automatically decrease (same risk %, larger stop distance = smaller position)
When T3 ATR decreases: Position sizes automatically increase (same risk %, smaller stop distance = larger position)
This approach maintains constant dollar risk per trade regardless of market volatility changes
Use consistent volume factor settings across all positions to ensure uniform risk measurement
📋 DETAILED PARAMETER CONFIGURATION
ATR Length Parameter:
Default Setting: 14 periods
This is the standard ATR calculation period established by Welles Wilder, providing balanced volatility measurement that captures both short-term fluctuations and medium-term trends across most markets and timeframes
Selection Principles:
Shorter periods increase sensitivity to recent volatility changes and respond faster to market shifts, but may produce less stable readings
Longer periods emphasize sustained volatility trends and filter out short-term noise, but respond more slowly to genuine regime changes
The optimal period depends on your holding time, trading frequency, and the typical volatility cycle of your instrument
Consider the timeframe you trade: Intraday traders typically use shorter periods, swing traders use intermediate periods, position traders use longer periods
Practical Approach:
Start with the default 14 periods and observe how well it captures volatility patterns relevant to your trading decisions
If ATR seems too reactive to minor price movements: Increase the period until volatility readings better reflect meaningful market changes
If ATR lags behind obvious volatility shifts that affect your trades: Decrease the period for faster response
Match the period roughly to your typical holding time - if you hold positions for N bars, consider ATR periods in a similar range
Test different periods using historical data for your specific instrument and strategy before committing to live trading
T3 Volume Factor Parameter:
Default Setting: 0.7
This setting provides a reasonable balance between responsiveness and smoothness for most market conditions and trading styles
Understanding the Volume Factor:
Lower values (closer to 0.0) reduce smoothing, allowing T3 ATR to respond more quickly to volatility changes but with less noise filtering
Higher values (closer to 1.0) increase smoothing, producing more stable readings that focus on sustained volatility trends but respond more slowly
The trade-off is between immediacy and stability - there is no universally optimal setting
Selection Principles:
Match to your decision speed: If you need to react quickly to volatility changes for entries/exits, use lower VF; if you're making longer-term risk assessments, use higher VF
Match to market character: Noisier, choppier markets may benefit from higher VF for clearer signals; cleaner trending markets may work well with lower VF for faster response
Match to your preference: Some traders prefer responsive indicators even with occasional false signals, others prefer stable indicators even with some delay
Practical Adjustment Guidelines:
Start with default 0.7 and observe how T3 ATR behavior aligns with your trading needs over multiple sessions
If readings seem too unstable or noisy for your decisions: Try increasing VF toward 0.9-1.0 for heavier smoothing
If the indicator lags too much behind volatility changes you care about: Try decreasing VF toward 0.3-0.5 for faster response
Make meaningful adjustments (0.2-0.3 changes) rather than small increments - subtle differences are often imperceptible in practice
Test adjustments in simulation or paper trading before applying to live positions
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Responsiveness Characteristics:
The T3 smoothing algorithm provides improved responsiveness compared to traditional RMA smoothing used in standard ATR. The triple exponential design with volume factor control allows the indicator to respond more quickly to genuine volatility changes while maintaining the ability to filter noise through appropriate VF settings. This results in earlier detection of volatility regime changes compared to standard ATR, particularly valuable for risk management and position sizing adjustments.
Signal Stability:
Unlike simple smoothing methods that may produce erratic signals during transitional periods, T3 ATR's multi-layer exponential smoothing provides more stable signal progression. The volume factor parameter allows traders to tune signal stability to their preference, with higher VF settings producing remarkably smooth volatility profiles that help avoid overreaction to temporary market fluctuations.
Comparison with Standard ATR:
Adaptability: T3 ATR allows adjustment of smoothing characteristics through the volume factor without changing the ATR period, whereas standard ATR requires changing the period length to alter responsiveness, potentially affecting the fundamental volatility measurement
Lag Reduction: At lower volume factor settings, T3 ATR responds more quickly to volatility changes than standard ATR with equivalent periods, providing earlier signals for risk management adjustments
Noise Filtering: At higher volume factor settings, T3 ATR provides superior noise filtering compared to standard ATR, producing cleaner signals for long-term analysis without sacrificing volatility measurement accuracy
Flexibility: A single T3 ATR configuration can serve multiple trading styles by adjusting only the volume factor, while standard ATR typically requires multiple instances with different periods for different trading applications
Suitable Use Cases:
T3 ATR is well-suited for the following scenarios:
Dynamic Risk Management: When position sizing and stop-loss placement need to adapt quickly to changing volatility conditions
Multi-Style Trading: When a single volatility indicator must serve different trading approaches (day trading, swing trading, position trading)
Volatile Markets: When standard ATR produces too many false volatility signals during choppy conditions
Systematic Trading: When algorithmic systems require a single, configurable volatility input that can be optimized for different instruments
Market Regime Analysis: When clear identification of volatility expansion and contraction phases is critical for strategy selection
Known Limitations:
Like all technical indicators, T3 ATR has limitations that users should understand:
Historical Nature: T3 ATR is calculated from historical price data and cannot predict future volatility with certainty
Smoothing Trade-offs: The volume factor setting involves a trade-off between responsiveness and smoothness - no single setting is optimal for all market conditions
Extreme Events: During unprecedented market events or gaps, T3 ATR may not immediately reflect the full scope of volatility until sufficient data is processed
Relative Measurement: T3 ATR values are most meaningful in relative context (compared to recent history) rather than as absolute thresholds
Market Context Required: T3 ATR measures volatility magnitude but does not indicate price direction or trend quality - it should be used in conjunction with directional analysis
Performance Expectations:
T3 ATR is designed to help traders measure and adapt to changing market volatility conditions. When properly configured and applied:
It can help reduce position risk during volatile periods through appropriate position sizing
It can help identify optimal times for more aggressive position sizing during stable periods
It can improve stop-loss placement by adapting to current market conditions
It can assist in strategy selection by identifying volatility regimes
However, volatility measurement alone does not guarantee profitable trading. T3 ATR should be integrated into a comprehensive trading approach that includes directional analysis, proper risk management, and sound trading psychology.
USAGE NOTES
This indicator is designed for technical analysis and educational purposes. T3 ATR provides adaptive volatility measurement but has limitations and should not be used as the sole basis for trading decisions. The indicator measures historical volatility patterns, and past volatility characteristics do not guarantee future volatility behavior. Market conditions can change rapidly, and extreme events may produce volatility readings that fall outside historical norms.
Traders should combine T3 ATR with directional analysis tools, support/resistance analysis, and other technical indicators to form a complete trading strategy. Proper backtesting and forward testing with appropriate risk management is essential before applying T3 ATR-based strategies to live trading. The volume factor parameter should be optimized for specific instruments and trading styles through careful testing rather than assuming default settings are optimal for all applications.