iBagging Multi-Indicator
Hello traders!
You know, machine learning is a very popular theme nowadays. The best tricks and methods have been borrowed from math and computer science to improve and create ML algorithms. As you know, one of our analysts is a great fan of ML, so he decided to borrow one very powerful method from it.
We have taken 5 indicators, tuned them a bit, and made them vote. If the number of votes exceeds the threshold, a bullish/bearish signal is shown. This is called bagging: several algorithms vote on a classification or a regression. We use an EMA cross with an NATR filter, BB Width, a divergence detector and Bull Bear Power. In my opinion this bundle is one of the best for defining entries. Try it in your daily trading toolkit. Don't forget to tune the parameters for different coins and timeframes. We checked it on 1H BTCUSDT, and the default parameters are for this combination. I hope you'll enjoy my masterpiece.
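As a rough illustration of the voting idea, here is a minimal Pine sketch; the five stand-in voters below are simplified placeholders, not the tuned indicators used in the published script:
//@version=5
indicator("Indicator voting sketch", overlay=true)
threshold = input.int(3, "Votes needed", minval=1, maxval=5)
// Five simplified stand-in voters; each contributes one bullish vote
vote1 = ta.ema(close, 21) > ta.ema(close, 55) ? 1 : 0          // EMA cross direction
vote2 = ta.rsi(close, 14) > 50 ? 1 : 0                         // momentum proxy
vote3 = close > ta.sma(close, 100) ? 1 : 0                     // long-term trend proxy
vote4 = ta.atr(14) > ta.sma(ta.atr(14), 50) ? 1 : 0            // volatility filter proxy
vote5 = volume > ta.sma(volume, 20) ? 1 : 0                    // participation proxy
bullVotes = vote1 + vote2 + vote3 + vote4 + vote5
plotshape(bullVotes >= threshold, style=shape.triangleup, location=location.belowbar, color=color.green, title="Bullish vote")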
How to use it?
Just add it to the chart and check up signals.
Trend Follow System
Trend following algorithm:
1- We take 5 Fibonacci EMA values: 21, 34, 55, 89, 144.
2- We normalize the changes of these values over time to a 1-100 scale.
3- We take an EMA of length 1 so that the series does not flatten out after the normalization step.
4- To avoid excessive fluctuation, we take an SMA of length 5.
5- We consider the trend up when all values are 100, down when all values are 0, and horizontal otherwise.
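A minimal sketch of the idea; the exact normalization used in the script is not shown above, so the 100-bar scaling window below is an assumption:
//@version=5
indicator("Trend follow sketch")
// Scale a series to 0..100 by its position inside a lookback window (assumed window of 100 bars)
norm(src) =>
    lo = ta.lowest(src, 100)
    hi = ta.highest(src, 100)
    100 * (src - lo) / math.max(hi - lo, syminfo.mintick)
smooth(src) =>
    ta.sma(ta.ema(norm(src), 1), 5)
n21  = smooth(ta.ema(close, 21))
n34  = smooth(ta.ema(close, 34))
n55  = smooth(ta.ema(close, 55))
n89  = smooth(ta.ema(close, 89))
n144 = smooth(ta.ema(close, 144))
trendUp   = n21 == 100 and n34 == 100 and n55 == 100 and n89 == 100 and n144 == 100
trendDown = n21 == 0 and n34 == 0 and n55 == 0 and n89 == 0 and n144 == 0
bgcolor(trendUp ? color.new(color.green, 85) : trendDown ? color.new(color.red, 85) : na)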
[blackcat] L2 Ehlers Adaptive Jon Andersen R-Squared Indicator
Level: 2
Background
@pips_v1 proposed an interesting idea: is it possible to code an "Adaptive Jon Andersen R-Squared Indicator" where the length is determined by the DC Period as calculated in Ehlers' Sine Wave Indicator? I agreed with him and started to construct this indicator. After some study, I found that the "[blackcat] L2 Ehlers Autocorrelation Periodogram" script could be reused for this purpose, because the Ehlers Autocorrelation Periodogram is an ideal candidate for calculating the dominant cycle. On the other hand, the R-Squared indicator has two inputs:
Length - number of bars to calculate moment correlation coefficient R
AvgLen - number of bars to calculate average R-square
I used the Ehlers Autocorrelation Periodogram to produce a dynamic value for the "Length" input of the R-Squared indicator, making it adaptive.
Function
One tool available in forecasting the trendiness of the breakout is the coefficient of determination (R-squared), a statistical measurement. The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
When the R-squared is at an extreme low, indicating that the mean is a better predictor than regression, it can only increase, indicating that the regression is becoming a better predictor than the mean. The opposite is true for extreme high values of the R-squared.
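For reference, the core (non-adaptive) R-squared computation can be sketched as follows; in the adaptive version the fixed Length input is replaced by the dominant cycle measured by the autocorrelation periodogram:
//@version=5
indicator("R-squared sketch")
length = input.int(20, "Length")
avgLen = input.int(5, "AvgLen")
r = ta.correlation(close, bar_index, length)   // Pearson correlation of price against time
rsq = r * r                                    // coefficient of determination
avgSqrR = ta.sma(rsq, avgLen)
plot(avgSqrR, "AvgSqrR")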
To make this indicator adaptive, the dominant cycle is extracted from the spectral estimate in the next block of code using a center-of-gravity ( CG ) algorithm. The CG algorithm measures the average center of two-dimensional objects. The algorithm computes the average period at which the powers are centered. That is the dominant cycle. The dominant cycle is a value that varies with time. The spectrum values vary between 0 and 1 after being normalized. These values are converted to colors. When the spectrum is greater than 0.5, the colors combine red and yellow, with yellow being the result when spectrum = 1 and red being the result when the spectrum = 0.5. When the spectrum is less than 0.5, the red saturation is decreased, with the result the color is black when spectrum = 0.
Construction of the autocorrelation periodogram starts with the autocorrelation function using the minimum three bars of averaging. The cyclic information is extracted using a discrete Fourier transform (DFT) of the autocorrelation results. This approach has at least four distinct advantages over other spectral estimation techniques. These are:
1. Rapid response. The spectral estimates start to form within a half-cycle period of their initiation.
2. Relative cyclic power as a function of time is estimated. The autocorrelation at all cycle periods can be low if there are no cycles present, for example, during a trend. Previous works treated the maximum cycle amplitude at each time bar equally.
3. The autocorrelation is constrained to be between minus one and plus one regardless of the period of the measured cycle period. This obviates the need to compensate for Spectral Dilation of the cycle amplitude as a function of the cycle period.
4. The resolution of the cyclic measurement is inherently high and is independent of any windowing function of the price data.
Key Signal
DC --> Ehlers dominant cycle.
AvgSqrR --> R-squared output of the indicator.
Remarks
This is a Level 2 free and open source indicator.
Feedbacks are appreciated.
Fibonacci Extension / Retracement / Pivot Points by DGT
Fɪʙᴏɴᴀᴄᴄɪ Exᴛᴇɴꜱɪᴏɴ / Rᴇᴛʀᴀᴄᴇᴍᴇɴᴛ / Pɪᴠᴏᴛ Pᴏɪɴᴛꜱ
This study combines various Fibonacci concepts into one, and some basic volume and volatility indications
█ Pɪᴠᴏᴛ Pᴏɪɴᴛꜱ — is a technical indicator that is used to determine the levels at which price may face support or resistance. The Pivot Points indicator consists of a pivot point (PP) level and several support (S) and resistance (R) levels. PP, resistance and support values are calculated in different ways, depending on the type of the indicator; this study implements Fibonacci Pivot Points
The indicator resolution is set by the input of the Pivot Points TF (Timeframe). If the Pivot Points TF is set to AUTO (the default value), then the increased resolution is determined by the following algorithm:
for intraday resolutions up to and including 5 min, 4HOURS (4H) is used
for intraday resolutions more than 5 min and up to and including 45 min, DAY (1D) is used
for intraday resolutions more than 45 min and up to and including 4 hour, WEEK (1W) is used
for daily resolutions, MONTH (1M) is used
for weekly resolutions, 3-MONTH (3M) is used
for monthly resolutions, 12-MONTH (12M) is used
If the Pivot Points TF is set to User Defined, users may choose any higher timeframe of their preference
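The mapping above can be sketched in Pine roughly as follows; this is a simplified re-creation, not the original code, and the pivot lines use the standard Fibonacci pivot ratios (0.382 shown here):
//@version=5
indicator("Auto Fibonacci pivots sketch", overlay=true)
tfMin  = timeframe.in_seconds() / 60
autoTF = timeframe.isintraday and tfMin <= 5 ? "240" : timeframe.isintraday and tfMin <= 45 ? "D" : timeframe.isintraday ? "W" : timeframe.isdaily ? "M" : timeframe.isweekly ? "3M" : "12M"
// Previous higher-timeframe high/low/close (offset by 1 bar to avoid repainting)
[h, l, c] = request.security(syminfo.tickerid, autoTF, [high[1], low[1], close[1]], lookahead=barmerge.lookahead_on)
pp = (h + l + c) / 3
r1 = pp + 0.382 * (h - l)
s1 = pp - 0.382 * (h - l)
plot(pp, "PP", color=color.gray)
plot(r1, "R1", color=color.red)
plot(s1, "S1", color=color.green)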
█ Fɪʙ Rᴇᴛʀᴀᴄᴇᴍᴇɴᴛ — Fibonacci retracement is a popular tool used by technical analysts to determine support and resistance areas. In technical analysis, this tool is created by taking two extreme points (usually a peak and a trough) on the chart and dividing the vertical distance by the key Fibonacci coefficients equal to 23.6%, 38.2%, 50%, 61.8%, and 100%. This study implements an automated method of identifying the pivot lows/highs and automatically draws horizontal lines that are used to determine possible support and resistance levels
█ Fɪʙᴏɴᴀᴄᴄɪ Exᴛᴇɴꜱɪᴏɴꜱ — Fibonacci extensions are a tool that traders can use to establish profit targets or estimate how far a price may travel AFTER a retracement/pullback is finished. Extension levels are also possible areas where the price may reverse. This study implements an automated method of identifying the pivot lows/highs and automatically draws horizontal lines that are used to determine possible support and resistance levels.
IMPORTANT NOTE: the Fibonacci extensions option may require further adjustment of the study parameters for proper usage. Extensions are meant to be used when a trend is present, and they measure how far a price may travel AFTER a retracement/pullback is finished. I strongly suggest users of this study check the education post for further details on where to use extensions and where to use retracements
Important input options for both Fibonacci Extensions and Retracements
Deviation is a multiplier that affects how much the price should deviate from the previous pivot in order for the bar to become a new pivot. Increasing its value is one way to get higher-timeframe Fib retracement levels
Depth affects the minimum number of bars that will be taken into account when building pivots
█ Volume / Volatility Add-Ons
High Volatile Bar Indication
Volume Spike Bar Indication
Volume Weighted Colored Bars
This study builds on TradingView's built-in Auto Fib Retracement study, with modifications applied to derive extensions and to fit this combo
Disclaimer:
Trading success is all about following your trading strategy; indicators should fit within that strategy and not be traded upon solely
The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone have the sole responsibility of evaluating the script output and risks associated with the use of the script. In exchange for using the script, you agree not to hold dgtrd TradingView user liable for any possible claim for damages arising from any decision you make based on use of the script
(IK) Base Break Buy
This strategy first calculates areas of support (bases), and then enters trades if that support is broken. The idea is to profit off of retracement. Dollar-cost-averaging safety orders are key here. This strategy takes into account a 0.1% commission, and tests are done with an initial capital of 100.00 USD. This only goes long.
The strategy is highly customizable. I've set the default values to suit ETH/USD 15m. If you're trading this on another ticker or timeframe, make sure to play around with the settings. There is an explanation of each input in the script comments. I found this to be profitable across most 'common sense' values for settings, but tweaking led to some pretty promising results. I leaned more towards high risk/high trade volume.
Always remember though: historical performance is no guarantee of future behavior . Keep settings within your personal risk tolerance, even if it promises better profit. Anyone can write a 100% profitable script if they assume price always eventually goes up.
Check the script comments for more details, but, briefly, you can customize:
-How many bases to keep track of at once
-How those bases are calculated
-What defines a 'base break'
-Order amounts
-Safety order count
-Stop loss
Here's the basic algorithm:
-Identify support.
--Have previous candles found bottoms in the same area of the current candle bottom?
--Is this support unique enough from other areas of support?
-Determine if support is broken.
--Has the price crossed under support quickly and with certainty?
-Enter trade with a percentage of initial capital.
-Execute safety orders if price continues to drop.
-Exit trade at profit target or stop loss.
Take profit is dynamic and calculated on order entry. The bigger the 'break', the higher your take profit percentage. This target percentage is based on average position size, so as safety orders are filled, and average position size comes down, the target profit becomes easier to reach.
Stop loss can be calculated one of two ways, either a static level based on initial entry, or a dynamic level based on average position size. If you use the latter (default), be aware, your real losses will be greater than your stated stop loss percentage . For example:
-stop loss = 15%, capital = 100.00, safety order threshold = 10%
-you buy $50 worth of shares at $1 - price average is $1
-you safety $25 worth of shares at $0.9 - price average is $0.966
-you safety $25 worth of shares at $0.8. - price average is $0.925
-you get stopped out at 0.925 * (1-.15) = $0.78625, and you're left with $78.62.
This is a realized loss of ~21.4% with a stop loss set to 15%. The larger your safety order threshold, the larger your real loss in comparison to your stop loss percentage, and vice versa.
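In Pine strategy code, that kind of dynamic stop can be expressed with the built-in average position price; a minimal sketch (the entry condition and names below are placeholders, not the published strategy's logic):
//@version=5
strategy("Average-price stop sketch", overlay=true)
stopPct = input.float(15.0, "Stop loss %") / 100
if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))   // placeholder entry condition
    strategy.entry("long", strategy.long)
if strategy.position_size > 0
    // Stop measured from the average fill price, so filled safety orders pull it down with the average
    strategy.exit("stop", from_entry = "long", stop = strategy.position_avg_price * (1 - stopPct))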
Indicator plots show the calculated bases in white. The closest base below price is yellow. If that base is broken, it turns purple. Once a trade is entered, profit target is shown in silver and stop loss in red.
(IK) Grid Script
This is my take on a grid trading strategy. From Investopedia:
"Grid trading is most commonly associated with the foreign exchange market. Overall the technique seeks to capitalize on normal price volatility in an asset by placing buy and sell orders at certain regular intervals above and below a predefined base price."
This strategy is best used on sideways markets, without a definitive up or down major trend. Because it doesn't rely on huge vertical movement, this strategy is great for small timeframes. It only goes long. I've set initial_capital to 100 USD. default_qty_value should be your initial capital divided by your amount of grid lines. I'm also assuming a 0.1% commission per trade.
Here's the basic algorithm:
- Create a grid based on an upper-bound (strong resistance) and a lower-bound (strong support)
- Grid lines are spaced evenly between these two bounds. (I recommend anywhere between 5-10 grid lines, but this script lets you use up to 15. More gridlines = more/smaller trades)
- Identify nearest gridline above and below current price (ignoring the very closest grid line)
- If price crosses under a near gridline, buy and recalculate near gridlines
- If price crosses over a near gridline, sell and recalculate near gridlines
- Trades are entered and exited based on a FIFO system. So if price falls 3 grid lines (buy-1, buy-2, buy-3), and subsequently crosses above one grid line, only the first trade will exit (sell-1). If it falls again, it will enter a new trade (buy-4), and if it crosses above again it will sell the original second trade (sell-2). The amount of trades you can be in at once are based on the amount of grid lines you have.
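A stripped-down sketch of the grid construction and the nearest-line logic (an indicator-style illustration only; the published strategy adds the FIFO order handling described above, and the bounds shown are placeholder values):
//@version=5
indicator("Grid level sketch", overlay=true)
lower = input.float(100.0, "Lower bound (strong support)")
upper = input.float(120.0, "Upper bound (strong resistance)")
qty   = input.int(8, "Grid quantity", minval=2, maxval=15)
step  = (upper - lower) / (qty - 1)
// Nearest grid lines below and above the current price
below = lower + math.floor((close - lower) / step) * step
above = below + step
plot(below, "Near line below", color=color.green)
plot(above, "Near line above", color=color.red)
plotshape(ta.crossunder(close, below), style=shape.triangleup, location=location.belowbar, color=color.green, title="Buy")
plotshape(ta.crossover(close, above), style=shape.triangledown, location=location.abovebar, color=color.red, title="Sell")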
This strategy has no built-in stop loss! This is not a "set-it-and-forget-it" script. Make sure that price remains within the bounds of your grid. If price exits above the grid, you're in the money, but you won't be making any more trades. If price exits below the grid, you're 100% staked in whatever you happen to be trading.
This script is more complicated than my last one, but should be more user friendly. Make sure to correctly set your lower-bound and upper-bound based on strong support and resistance (the default values for these are probably going to be meaningless). If you change your "Grid Quantity" (amount of grid lines) make sure to also change your 'Order Size' property under settings for proper test results (or default_qty_value in the strategy() declaration).
Repeated Median Regression Channel
This script uses the Repeated Median (RM) estimator to construct a linear regression channel and thus offers an alternative to the available codes based on ordinary least squares.
The RM estimator is a robust linear regression algorithm. It was proposed by Siegel in 1982 (1) and has since found many applications in science and engineering for linear trend estimation and data filtering.
The key difference between RM and ordinary least squares methods is that the slope of the RM line is significantly less affected by data points that deviate strongly from the established trend. In statistics, these points are usually called outliers, while in the context of price data, they are associated with gaps, reversals, and breaks from the trading range. Thus, robustness to outliers means that a nascent deviation from a predetermined trend will be seen more clearly in the RM regression compared to the least-squares estimate. For the same reason, the RM model is expected to better depict gaps and trend changes (2).
Input Description
Length : Determines the length of the regression line.
Channel Multiplier : Determines the channel width in units of root-mean-square deviation.
Show Channel : If switched off , only the (central) regression line is displayed.
Show Historical Broken Channel : If switched on , the channels that were broken in the past are displayed. Note that a certain historical broken channel is shown only when at least Length / 2 bars have passed since the last historical broken channel.
Print Slope : Displays the value of the current RM slope on the graph.
Method
Calculation of the RM regression line is done as follows (1,3):
For each sample point (t(i), y(i)) with i = 1..Length, the algorithm calculates the median of all the slopes of the lines connecting this point to the other Length − 1 points.
The regression slope is defined as the median of the set of these median slopes.
The regression intercept is defined as the median of the set {y(i) − m * t(i)}.
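A direct Pine sketch of this brute-force procedure (illustrative only; the published script additionally builds the channel, the broken-channel history and the display options):
//@version=5
indicator("Repeated median slope sketch", overlay=true)
length = input.int(50, "Length", minval=2)
// Median of pairwise slopes for each point, then the median of those medians
rmSlope() =>
    medians = array.new_float()
    for i = 0 to length - 1
        slopes = array.new_float()
        for j = 0 to length - 1
            if i != j
                // t decreases by one bar per index step back, hence the (j - i) denominator
                array.push(slopes, (close[i] - close[j]) / (j - i))
        array.push(medians, array.median(slopes))
    array.median(medians)
m = rmSlope()
// Intercept: median of y(i) - m * t(i), with t measured in bar_index units
resid = array.new_float()
for i = 0 to length - 1
    array.push(resid, close[i] - m * (bar_index - i))
rmValue = array.median(resid) + m * bar_index   // RM regression value at the current bar
plot(rmValue, "RM line")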
Computational Time
The present implementation utilizes a brute-force algorithm for computing the RM slope that takes O(Length^2) time. Therefore, the calculation of the historical broken channels might take a relatively long time (depending on the Length parameter). However, when the Show Historical Broken Channel option is off, only the real-time RM channel is calculated, and this is done quite fast.
References
1. A. F. Siegel (1982), Robust regression using repeated medians, Biometrika, 69 , 242–244.
2. P. L. Davies, R. Fried, and U. Gather (2004), Robust signal extraction for on-line monitoring data, Journal of Statistical Planning and Inference 122 , 65-78.
3. en.wikipedia.org
Tic Tac Toe (For Fun)
Hello All,
I think all of you know the game "Tic Tac Toe" :) This time I tried to make this game, and also I tried to share an example to develop a game script in Pine. Just for fun ;)
Tic Tac Toe Game Rules:
1. The game is played on a grid that's 3 squares by 3 squares.
2. You are "O", the computer is X. Players take turns putting their marks in empty squares.
3. if a player makes 3 of her marks in a row (up, down, across, or diagonally) the he is the winner.
4. When all 9 squares are full, the game is over (draw)
So, how to play the game?
- You play "O", meaning your mark is "O" and the script plays "X". Please note: the script plays ONLY X.
- Each square has a name: A1, A2, A3, B1, B2, B3, C1, C2, C3. You will see all these squares in the options.
- also You can set who will play first => "Human" or "Computer"
If it's your turn to move, you will see the "You Move" text, as seen in the following screenshot. For example, if you want to put "O" on "A1", then using the options set A1 as O.
How the script play?
It uses the MinMax (minimax) algorithm with a constant depth of 4. And yes, we don't have the option to write recursive functions in Pine at the moment, so I made one function for each of the four depth levels. This idea can be used in your scripts if you need such an algorithm. If you have no idea about the MinMax algorithm, you can find a lot of articles on the net :)
The script plays its move automatically when it is its turn. You will just need to set the option for the square the computer played (A1, C3, etc.).
If it's the computer's turn to play, it calculates and shows the move it wants to make, like "My Move : B3 <= X"; then, using the options, you need to set B3 as X.
It also checks whether the board is valid or not.
I have tested it but if you see any bug let me know please
Enjoy!
Max Drawdown Calculating Functions (Optimized)
Maximum Drawdown and Maximum Relative Drawdown% calculating functions.
I needed a way to calculate the maxDD% of a series of data from an array (the different values of my account balance). I didn't find any built-in Pine Script way to do it, so here it is.
There are 2 algorithms to calculate maxDD and relative maxDD%: one, non-optimized, needs n*(n - 1)/2 comparisons for a collection of n data points; the other one only needs n-1 comparisons.
In the example we calculate the maxDDs of the last 10 close values.
There are 2 functions: "maximum_relative_drawdown" and "maximum_drawdown" (and "optimized_maximum_relative_drawdown" and "optimized_maximum_drawdown"), with names speaking for themselves.
Input : an array of floats of arbitrary size (the values we want the DD of)
Output : an array of 4 values
I added the iteration number just for fun.
Basically my script is the implementation of these 2 algos I found on the net:
// Optimized single pass (n-1 comparisons), tracking the running peak
var peak = 0;
var n = prices.length;
for (var i = 1; i < n; i++){
    dif = prices[peak] - prices[i];
    peak = dif < 0 ? i : peak;
    maxDrawdown = maxDrawdown > dif ? maxDrawdown : dif;
}
// Brute force (n*(n-1)/2 comparisons), comparing every pair of values
var n = prices.length;
for (var i = 0; i < n; i++){
    for (var j = i + 1; j < n; j++){
        dif = prices[i] - prices[j];
        maxDrawdown = maxDrawdown > dif ? maxDrawdown : dif;
    }
}
Feel free to use it.
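For reference, the optimized single-pass idea translates to a Pine array function along these lines (a simplified single-value sketch; the published functions also return the relative drawdown %, the iteration count and related values in a 4-element array):
//@version=5
indicator("Max drawdown sketch")
f_max_drawdown(values) =>
    float maxDD = 0.0
    float peak = array.get(values, 0)
    for i = 1 to array.size(values) - 1
        v = array.get(values, i)
        peak := math.max(peak, v)
        maxDD := math.max(maxDD, peak - v)
    maxDD
// Example: max drawdown of the last 10 closes
closes = array.new_float()
for k = 0 to 9
    array.push(closes, close[9 - k])   // oldest first
plot(f_max_drawdown(closes))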
[blackcat] L2 Ehlers Autocorrelation Periodogram
Level: 2
Background
John F. Ehlers introduced the Autocorrelation Periodogram in chapter 8 of his book "Cycle Analytics for Traders" (2013).
Function
Construction of the autocorrelation periodogram starts with the autocorrelation function using the minimum three bars of averaging. The cyclic information is extracted using a discrete Fourier transform (DFT) of the autocorrelation results. This approach has at least four distinct advantages over other spectral estimation techniques. These are:
1. Rapid response. The spectral estimates start to form within a half-cycle period of their initiation.
2. Relative cyclic power as a function of time is estimated. The autocorrelation at all cycle periods can be low if there are no cycles present, for example, during a trend. Previous works treated the maximum cycle amplitude at each time bar equally.
3. The autocorrelation is constrained to be between minus one and plus one regardless of the period of the measured cycle period. This obviates the need to compensate for Spectral Dilation of the cycle amplitude as a function of the cycle period.
4. The resolution of the cyclic measurement is inherently high and is independent of any windowing function of the price data.
The dominant cycle is extracted from the spectral estimate in the next block of code using a center-of-gravity (CG) algorithm. The CG algorithm measures the average center of two-dimensional objects. The algorithm computes the average period at which the powers are centered. That is the dominant cycle. The dominant cycle is a value that varies with time. The spectrum values vary between 0 and 1 after being normalized. These values are converted to colors. When the spectrum is greater than 0.5, the colors combine red and yellow, with yellow being the result when spectrum = 1 and red being the result when the spectrum = 0.5. When the spectrum is less than 0.5, the red saturation is decreased, with the result the color is black when spectrum = 0.
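The center-of-gravity step itself is compact; in sketch form it is just a power-weighted average period (the toy spectrum and the 10..48 period range below are illustrative assumptions, not Ehlers' original identifiers):
//@version=5
indicator("Center-of-gravity sketch")
// Toy spectrum: power at periods 10..48 (in the real script this comes from the DFT of the autocorrelations)
spectrum = array.new_float()
for p = 10 to 48
    array.push(spectrum, math.exp(-math.pow(p - 20, 2) / 50.0))   // fake hump centered at period 20
num = 0.0
den = 0.0
for i = 0 to array.size(spectrum) - 1
    pwr = array.get(spectrum, i)
    num += (10 + i) * pwr
    den += pwr
dominantCycle = den > 0 ? num / den : na
plot(dominantCycle, "DominantCycle")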
Key Signal
DominantCycle --> Dominant Cycle
Period --> Autocorrelation Periodogram Array
Pros and Cons
This is a 100% translation of John F. Ehlers' original definition; even the variable names are the same. This helps readers who would like to use Pine to read his book. If you have read his works, you will be quite familiar with my code style.
Remarks
The 49th script for Blackcat1402 John F. Ehlers Week publication.
Courtesy of @RicardoSantos for RGB functions.
Readme
In real life, I am a prolific inventor. I have successfully applied for more than 60 international and regional patents in the past 12 years. But in the past two years or so, I have tried to transfer my creativity to the development of trading strategies. Tradingview is the ideal platform for me. I am selecting and contributing some of the hundreds of scripts to publish in Tradingview community. Welcome everyone to interact with me to discuss these interesting pine scripts.
The scripts posted are categorized into 5 levels according to my efforts or manhours put into these works.
Level 1 : interesting script snippets or distinctive improvement from classic indicators or strategy. Level 1 scripts can usually appear in more complex indicators as a function module or element.
Level 2 : composite indicator/strategy. By selecting or combining several independent or dependent functions or sub indicators in proper way, the composite script exhibits a resonance phenomenon which can filter out noise or fake trading signal to enhance trading confidence level.
Level 3 : comprehensive indicator/strategy. They are simple trading systems based on my strategies. They are commonly containing several or all of entry signal, close signal, stop loss, take profit, re-entry, risk management, and position sizing techniques. Even some interesting fundamental and mass psychological aspects are incorporated.
Level 4 : script snippets or functions that do not disclose source code. Interesting element that can reveal market laws and work as raw material for indicators and strategies. If you find Level 1~2 scripts are helpful, Level 4 is a private version that took me far more efforts to develop.
Level 5 : indicator/strategy that do not disclose source code. private version of Level 3 script with my accumulated script processing skills or a large number of custom functions. I had a private function library built in past two years. Level 5 scripts use many of them to achieve private trading strategy.
BuyTheDip
Well, I often had arguments in an online forum with a guy who claimed to time the market perfectly without any technical analysis or prior experience. He often claimed that technical analysis does not work and that the only thing that works is trading on others' emotions. He also argued that algorithmic trading isn't profitable - if it were, everyone would do it. Hence, I thought I would convert his idea into an algorithm.
In his own words, the strategy is as below:
Choose an instrument that is in a full uptrend.
Wait for the panic sell and buy the dip.
Once the market recovers, exit immediately.
It seems to do just fine with indexes, but not so well when it comes to stocks.
Trend-Range Identifier
Trend trading algorithms fail in ranging markets and swing trading algorithms fail in trending markets. The purpose of this indicator is to identify whether the instrument is trending or ranging, so that you can apply the appropriate trading algorithm for the market.
Process:
ATR is calculated based on the input parameter atrLength
A range/channel containing upLine and downLine is calculated by adding/subtracting atrMultiplier * atr to/from the close price.
This range/channel remains the same until the price breaks either upLine or downLine.
Once price crosses either upLine or downLine, a new upLine/downLine is calculated based on the latest close price.
If price breaks upLine, the trend is considered to be up until the next line break or no lines are broken for rangeLength bars. During this state, candles are colored in lime and upLine/downLine are colored in green.
If price breaks downLine, the trend is considered to be down until the next line break or no lines are broken for rangeLength bars. During this state, candles are colored in orange and upLine/downLine are colored in red.
If close price does not break either upLine or downLine for rangeLength bars, then the instrument is considered to be in range. During this state, candles are colored in silver and upLine/downLine are colored in purple.
During the ranging state, we display one of Keltner Channel, Bollinger Band or Donchian Band, as per the input parameter rangeChannel. Other parameters used for the calculation are rangeLength and stdDev.
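A minimal sketch of the line-break state machine described above (the exact recalculation rules and colors of the published script may differ; this only reproduces the general idea):
//@version=5
indicator("Trend-range sketch", overlay=true)
atrLength     = input.int(14, "atrLength")
atrMultiplier = input.float(2.0, "atrMultiplier")
rangeLength   = input.int(20, "rangeLength")
a = ta.atr(atrLength)
var float upLine = na
var float downLine = na
var int lastBreak = 0
var int state = 0                       // 1 = up trend, -1 = down trend, 0 = range
if na(upLine) or close > upLine or close < downLine
    state     := na(upLine) ? 0 : close > upLine ? 1 : -1
    upLine    := close + atrMultiplier * a
    downLine  := close - atrMultiplier * a
    lastBreak := bar_index
if bar_index - lastBreak > rangeLength
    state := 0
lineColor = state == 1 ? color.green : state == -1 ? color.red : color.purple
plot(upLine, "upLine", color=lineColor)
plot(downLine, "downLine", color=lineColor)
barcolor(state == 1 ? color.lime : state == -1 ? color.orange : color.silver)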
I have not fully optimized parameters. Suggestions and feedback welcome.
Dynamic Dots Dashboard (a Cloud/ZLEMA Composite)
The purpose of this indicator is to provide an easy-to-read binary dashboard of where the current price is relative to key dynamic supports and resistances. The concept is simple: if a dynamic s/r is currently acting as a resistance, the indicator plots a dot above the histogram in the red box. If a dynamic s/r is acting as support, a dot is plotted in the green box below.
There are some additional features, but the dot graphs are king.
_______________________________________________________________________________________________________________
KEY:
_______________________________________________________________________________________________________________
Currently the dynamic s/r's being used in the dot plots are:
Ichimoku Cloud:
Tenkan (blue)
Kijun (pink)
Senkou A (red)
Senkou B (green)
ZLEMA (Zero Lag Exponential Moving Average)
99 ZLEMA (lavender)
200 ZLEMA (salmon)
You'll see a dashed line through the middle of the resistances section (red) and supports section (green). Cloud indicators are plotted above the dashed line, and ZLEMA's are below.
_______________________________________________________________________________________________________________
How it Works - Visual
_______________________________________________________________________________________________________________
As stated in the intro - if a dynamic s/r is currently above the current price and acting as a resistance, the indicator plots a dot above the histogram in the red box. If a dynamic s/r is acting as support, a dot is plotted in the green box below. Additionally, there is an optional histogram (default is on) that will further visualize this relationship. The histogram is a simple summation of the resistances above and the supports below.
Here's a visual to assist with what that means. This chart includes all of those dynamic s/r's in the dynamic dot dashboard (the on-chart parts are individually added, not part of this tool).
You can see that as a dynamic support is lost, the corresponding dot is moved from the supports section at the bottom (green) to the resistances section at the top (red). The opposite is true as resistances are overtaken (broken resistances are moved to the support section (green)). You can see that the raw chart is just... a mess. Which kind of accentuates one of the key goals of this indicator: to get all that dynamic support info without a mess of a chart like that.
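The counting behind the dots and the histogram can be sketched like this (only four of the six levels are shown, and the ZLEMA construction below is the generic textbook formula, not necessarily the exact one used in the script):
//@version=5
indicator("Dynamic S/R count sketch")
tenkan = (ta.highest(high, 9) + ta.lowest(low, 9)) / 2
kijun  = (ta.highest(high, 26) + ta.lowest(low, 26)) / 2
zlema(src, len) =>
    ta.ema(src + (src - src[(len - 1) / 2]), len)
zl99  = zlema(close, 99)
zl200 = zlema(close, 200)
levels = array.from(tenkan, kijun, zl99, zl200)
supports = 0
resists  = 0
for i = 0 to array.size(levels) - 1
    lvl = array.get(levels, i)
    supports += close > lvl ? 1 : 0    // level below price -> acting as support
    resists  += close < lvl ? 1 : 0    // level above price -> acting as resistance
plot(supports - resists, "Net supports", style=plot.style_histogram)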
_______________________________________________________________________________________________________________
How To Use It
_______________________________________________________________________________________________________________
There are a lot of ways to use this information, but the most notable of which is to detect shifts in the market cycle.
For this example, take a look at the dynamic s/r dots in the resistances category (red background). You can see clearly that there are distinctive blocks of high density dots that have clear beginnings and ends. When we transition from a high density of dots to none in resistances, that means we are flipping them as support and entering a bull cycle. On the other hand, when we go from low density of dots as resistances to high density, we're pivoting to a bear cycle. Easy as that, you can quickly detect when market cycles are beginning or ending.
Alternatively, you can add your preferred linear SR's, fibs, etc. to the chart and quickly glance at the dashboard to gauge how dynamic SR's may be contributing to the risk of your trade.
_______________________________________________________________________________________________________________
Who It's For
_______________________________________________________________________________________________________________
New traders: by looking at dot density alone, you can use Dot Dynamics to spot transitionary phases in market cycles.
Experienced traders: keep your charts clean and the information easy to digest.
Developers: I created this originally as a starting point for more complex algos I'm working on. One algo is reading this dot dashboard and taking a position size relative to the s/r's above and below. Another cloud algo is using the results as inputs to spot good setups.
Colored Bars
There is an option (off by default, shown in the headline image above) to fill the bar colors based on how many dynamic s/r's are above or below the current price. This can make things easier for some users, confusing for others. I defaulted them to off as I don't want colors to confuse the primary value proposition of the indicators, which is the dot heat map. You can turn on colored bars in the settings.
One thing to note with the colored bars: they plot the color purely by the dot densities. Random spikes in the gradient colors (i.e. red to lime or green) can be a useful thing to notice, as they commonly occur at places where the price is bouncing between dynamic s/r's and can indicate a paradigm shift in the market cycle.
_______________________________________________________________________________________________________________
Timeframes and Assets
_______________________________________________________________________________________________________________
This can be used effectively on all assets (stocks, crypto, forex, etc) and all time frames. As always with any indicator, the higher TF's are generally respected more than lower TF's.
Thanks for checking it out! I've been trading crypto for years and am just now beginning to publish my ideas, secret-sauce scripts and handy tools (like this one). If you enjoyed this indicator and would like to see more, a like and a follow is greatly appreciated 😁.
McGinley Dynamic (Improved) - John R. McGinley, Jr.
For all the McGinley enthusiasts out there, this is my improved version of the "McGinley Dynamic", originally formulated and publicized in 1990 by John R. McGinley, Jr. Prior to this release, I recently had an encounter with a member request regarding the reliability and stability of the general algorithm. Years ago, I attempted to discover the root of its inconsistency, but success was not possible until now. Being no stranger to a good old fashioned computational crisis, I revisited it with considerable contemplation.
I discovered a lack of constraints in the formulation that either caused the algorithm to implode to near zero and zero OR to explosively enlarge to near infinite values during unusual price action volatility conditions, occurring on different time frames. A numeric E-notation in a moving average doesn't mean a stock just shot up in excess of a few quintillion in value from just "10ish" moments ago. Anyone experienced with the usual McGinley Dynamic has probably encountered this, with dynamically dramatic surprises in their chart destroying its usability.
Well, I believe I have found an answer to this dilemma of 'susceptibility to miscalculation', to provide what is most likely McGinley's whole hearted intention. It required upgrading the formulation with two constraints applied to it using min/max() functions. Let me explain why below.
When using base numbers with an exponent to the power of four, some miniature numbers smaller than one can numerically collapse to near 0 values, or even 0.0 itself. A denominator of zero will always give any computational device a horribly bad day, not to mention the developer. Let this be an EASY lesson in computational division, I often entertainingly express to others. You have heard the terminology "$#|T happens!🙂" right? In the programming realm, "AnyNumber/0.0 CAN happen!🤪" too, and it happens "A LOT" unexpectedly, even when it's highly improbable. On the other hand, numbers a bit larger than 2 with the power of four can tremendously expand rapidly to the numeric limits of 64-bit processing, generating ginormous spikes on a chart.
The ephemeral presence of one OR both of those potentials now has a combined satisfactory remedy, AND you as TV members now have it, endowed with the ever evolving "Power of Pine". Oh yeah, this one plots from bar_index==0 too. It also has experimental settings tweaks to play with, that may reveal untapped potential of this formulation. This function now has gain of function capabilities, NOT to be confused with viral gain of function enhancements from reckless BSL-4 leaking laboratories that need to be eternally abolished from this planet. Although, I do have hopes this imd() function has the potential to go viral. I believe this improved function may have utility in the future by developers of the TradingView community. You have the source, and use it wisely...
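To make the idea concrete, here is a minimal sketch of a McGinley Dynamic with a clamped price/MD ratio; the clamp bounds and the N * ratio^4 form below are assumptions for illustration, not the exact constraints used in the published imd() function:
//@version=5
indicator("Constrained McGinley sketch", overlay=true)
n = input.float(14.0, "Length")
var float md = na
ratio = na(md) ? 1.0 : close / md
// Hypothetical clamp: keep the ratio in a sane band so ratio^4 can neither
// collapse toward zero nor explode to huge values
clamped = math.min(math.max(ratio, 0.5), 2.0)
md := na(md) ? close : md + (close - md) / (n * math.pow(clamped, 4))
plot(md, "Constrained McGinley")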
I included a generic ema() plot for a basic comparison, ultimately unveiling some of this algorithm's unique characteristics differing on a variety of time frames. Also, another unconstrained function is included to display some of the disparities of having no limitations on a divisor in the calculation. I strongly advise against the use of umd() in any published script. There is simply just no reason to even ponder using it. I also included notes in the script to warn against this. It's funny now, but some folks don't always read/understand my advisories... You have been warned!
NOTICE: You have absolute freedom to use this source code any way you see fit within your new Pine projects, and that includes TV themselves. You don't have to ask for my permission to reuse this improved function in your published scripts, simply because I have better things to do than answer requests for the reuse of this simplistic imd() function. Sufficient accreditation regarding this script and compliance with "TV's House Rules" regarding code reuse, is as easy as copying the entire function as is. Fair enough? Good! I have a backlog of "computational crises" to contend with, including another one during the writing of this elaborate description.
When available time provides itself, I will consider your inquiries, thoughts, and concepts presented below in the comments section, should you have any questions or comments regarding this indicator. When my indicators achieve more prevalent use by TV members, I may implement more ideas when they present themselves as worthy additions. Have a profitable future everyone!
Many Moving Averages
This script allows you to add two moving averages to a chart, where the type of moving average can be chosen from a collection of 15 different moving average algorithms. Each moving average can also have different lengths and crossovers/unders can be displayed and alerted on.
The supported moving average types are:
Simple Moving Average ( SMA )
Exponential Moving Average ( EMA )
Double Exponential Moving Average ( DEMA )
Triple Exponential Moving Average ( TEMA )
Weighted Moving Average ( WMA )
Volume Weighted Moving Average ( VWMA )
Smoothed Moving Average ( SMMA )
Hull Moving Average ( HMA )
Least Square Moving Average/Linear Regression ( LSMA )
Arnaud Legoux Moving Average ( ALMA )
Jurik Moving Average ( JMA )
Volatility Adjusted Moving Average ( VAMA )
Fractal Adaptive Moving Average ( FRAMA )
Zero-Lag Exponential Moving Average ( ZLEMA )
Kaufman Adaptive Moving Average ( KAMA )
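A minimal sketch of how such a type selector can be wired up (only four of the fifteen types are shown, and the ZLEMA line uses the generic formula; names and defaults are placeholders):
//@version=5
indicator("MA selector sketch", overlay=true)
maType = input.string("EMA", "MA type", options=["SMA", "EMA", "HMA", "ZLEMA"])
maLen  = input.int(21, "Length")
f_ma(src, len, typ) =>
    sma   = ta.sma(src, len)
    ema   = ta.ema(src, len)
    hma   = ta.hma(src, len)
    zlema = ta.ema(src + (src - src[(len - 1) / 2]), len)
    typ == "SMA" ? sma : typ == "HMA" ? hma : typ == "ZLEMA" ? zlema : ema
plot(f_ma(close, maLen, maType), "MA 1")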
Many of the moving average algorithms were taken from other peoples' scripts. I'd like to thank the authors for making their code available.
JayRogers
Alex Orekhov (everget)
Alex Orekhov (everget)
Joris Duyck (JD)
nemozny
Shizaru
KobySK
Jurik Research and Consulting for inventing the JMA.
Bitradertracker
This indicator no longer consists of moving lines that cross to give entry or exit signals; it goes further and graphically interprets what is happening with the asset.
It is a powerful algorithm that includes 4 trend indicators and 2 volume indicators.
With this indicator we can move with the "strong hands" of the market, track their intentions and make buying and selling decisions.
Designed to trade cryptocurrencies.
As for which timeframe to use, the bigger the better, since in the end what we are doing is data analysis and, therefore, the more data, the better. Personally, I recommend using it on 30-minute, 1-hour and 4-hour candles.
Remember, no indicator is 100% effective.
This indicator shows us strong hands in the purple areas and weak hands in the green areas, and just by showing this graphically the indicator is already worthwhile.
The market is driven by two types of investors, called strong hands or whales (agencies, funds, companies, banks, etc.) and weak hands or small fish (that is, us).
We do not have the ability to manipulate a price, since our portfolio is limited, but we can enter and exit positions easily since we do not have much money.
Whales can manipulate a price since they have many bitcoins and/or a lot of money; however, they cannot move easily.
So, how can whales buy or sell their coins? Well, they play their game: they try to make us believe that the coin is cheap when they want to sell us their coins, or make us believe that the coin is expensive when they want to buy ours. This manipulation is done in many ways, mostly through news.
We, the small fish, cannot compete against the whales, but we can find out what they are doing (remember, they are slow, moving their monstrous amounts of money), move with them and imitate them. Better to be under the whale than in front of it.
With this indicator you can see when the whales are operating and react, because the mathematical approach behind it has proven to be quite successful.
When strong hands are below zero, they are said to be buying. The same goes for weak hands. Generally, if strong hands are buying or selling, the price moves sideways. Price movement is associated with the buying and selling done by the weak hands.
I hope you find it very useful.
Bitrader4.0
Acc/DistAMA with FRACTAL DEVIATION BANDS by @XeL_Arjona
ACCUMULATION/DISTRIBUTION ADAPTIVE MOVING AVERAGE with FRACTAL DEVIATION BANDS
Ver. 2.5 @ 16.09.2015
By Ricardo M Arjona @XeL_Arjona
DISCLAIMER:
The Following indicator/code IS NOT intended to be a formal investment advice or recommendation by the
author, nor should be construed as such. Users will be fully responsible by their use regarding their own trading vehicles/assets.
The embedded code and ideas within this work are FREELY AND PUBLICLY available on the Web for NON LUCRATIVE ACTIVITIES and must remain as is.
Pine Script code MOD's and adaptations by @XeL_Arjona with special mention in regard of:
Buy (Bull) and Sell (Bear) "Power Balance Algorithm" by:
Stocks & Commodities V. 21:10 (68-72): "Bull And Bear Balance Indicator by Vadim Gimelfarb"
Fractal Deviation Bands by @XeL_Arjona.
Color Cloud Fill by @ChrisMoody
CHANGE LOG:
Following a "Fractal Approach" now the lookback window is hardcode correlated with a given timeframe. (Default @ 126 days as Half a Year / 252 bars)
Clean and speed up of Adaptive Moving Average Algo.
Fractal Deviation Band Cloud coloring smoothed.
ALL NEW IDEAS OR MODIFICATIONS to these indicator(s) are Welcome in favor to deploy a better and more accurate readings. I will be very glad to be notified at Twitter or TradingVew accounts at: @XeL_Arjona
Any important addition to this work MUST REMAIN PUBLIC by means of CreativeCommons CC & TradingView. Copyright 2015
Volume Pressure Composite Average with Bands by @XeL_Arjona
VOLUME PRESSURE COMPOSITE AVERAGE WITH BANDS
Ver. 1.0.beta.10.08.2015
By Ricardo M Arjona @XeL_Arjona
DISCLAIMER:
The Following indicator/code IS NOT intended to be a formal investment advice or recommendation by the author, nor should be construed as such. Users will be fully responsible by their use regarding their own trading vehicles/assets.
The embedded code and ideas within this work are FREELY AND PUBLICLY available on the Web for NON LUCRATIVE ACTIVITIES and must remain as is.
Pine Script code MOD's and adaptations by @XeL_Arjona with special mention in regard of:
Buy (Bull) and Sell (Bear) "Power Balance Algorithm" by :
Stocks & Commodities V. 21:10 (68-72):
"Bull And Bear Balance Indicator by Vadim Gimelfarb"
Adjusted Exponential Adaptation from original Volume Weighted Moving Average (VEMA) by @XeL_Arjona with help given at the @pinescript chat room with special mention to @RicardoSantos
Color Cloud Fill Condition algorithm by @ChrisMoody
WHAT IS THIS?
The following indicators try to acknowledge in a K-I-S-S approach to the eye (Keep-It-Simple-Stupid), the two most important aspects of nearly every trading vehicle: -- PRICE ACTION IN RELATION BY IT'S VOLUME --
A) My approach is to make this indicator both a "Trend Follower" as well as a volatility measure, expressed in the bands, which are the weighting basis of the trend given the "Cross Signal" from the Buy & Sell Volume Pressures algorithm.
B) Please experiment with lookback periods against different timeframes. Given the mathematical volume monster this kind of study is, and in concordance with price action, at first glance I've noted that both in short and in long term periods the indicator tends to adapt quite well to general price action conditions. BE ADVISED THIS IS EXPERIMENTAL!
C) ALL NEW IDEAS OR MODIFICATIONS to these indicator(s) are Welcome in favor to deploy a better and more accurate readings. I will be very glad to be notified at Twitter or TradingVew accounts at: @XeL_Arjona
Any important addition to this work MUST REMAIN PUBLIC by means of CreativeCommons CC & TradingView. --- All Authorship Rights RESERVED 2015 ---
Cumulative Intraday Volume with Long/Short Labels
This indicator calculates a running total of volume for each trading day, then shows on the price chart when that total crosses levels you choose. Every day at 6:00 PM Eastern Time, the total goes back to zero so it always reflects only the current day’s activity. From that moment on, each time a new candle appears the indicator looks at whether the candle closed higher than it opened or lower. If it closed higher, the candle’s volume is added to the running total; if it closed lower, the same volume amount is subtracted. As a result, the total becomes positive when buyers have dominated so far today and negative when sellers have dominated.
Because futures markets close at 6 PM ET, the running total resets exactly then, mirroring the way most intraday traders think in terms of a single session. Throughout the day, you will see this running total move up or down according to whether more volume is happening on green or red candles. Once the total goes above a number you specify (for example, one hundred thousand contracts), the indicator will place a small “Long” label at that candle on the main price chart to let you know buying pressure has reached that level. Similarly, once the total goes below a negative number you choose (for example, minus one hundred thousand), a “Short” label will appear at that candle to signal that selling pressure has reached your chosen threshold. You can set these threshold numbers to whatever makes sense for your trading style or the market you follow.
Because raw volume alone never turns negative, this design uses candle direction as a sign. Green candles (where the close is higher than the open) add volume, and red candles (where the close is lower than the open) subtract volume. Summing those signed volume values tells you in a single number whether buying or selling has been stronger so far today. That number resets every evening, so it does not carry over any buying or selling from previous sessions.
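In Pine terms, the accumulation described above boils down to a few lines; a minimal sketch (the 18:00 New York reset and the 100,000 example thresholds are taken from the description above, everything else is a simplified assumption):
//@version=5
indicator("Signed intraday volume sketch", overlay=true)
longTh  = input.float(100000, "Long threshold")
shortTh = input.float(-100000, "Short threshold")
nyHour = hour(time, "America/New_York")
newSession = nyHour == 18 and nyHour[1] != 18      // daily reset at 6 PM ET
var float cum = 0.0
if newSession
    cum := 0.0
cum += close > open ? volume : close < open ? -volume : 0
plotshape(ta.crossover(cum, longTh), style=shape.labelup, location=location.belowbar, color=color.green, text="Long")
plotshape(ta.crossunder(cum, shortTh), style=shape.labeldown, location=location.abovebar, color=color.red, text="Short")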
Once you have this indicator on your chart, you simply watch the “summed volume” line as it moves throughout the day. If it climbs past your long threshold, you know buyers are firmly in control and a long entry might make sense. If it falls past your short threshold, you know sellers are firmly in control and a short entry might make sense. In quieter markets or times of low volume, you might use a smaller threshold so that even modest buying or selling pressure will trigger a label. During very active periods, a larger threshold will prevent too many signals when volume spikes frequently.
This approach is straightforward but can be surprisingly powerful. It does not rely on complex formulas or hidden statistical measures. Instead, it simply adds and subtracts daily volume based on candle color, then alerts you when that total reaches levels you care about. Over several years of historical testing, this formula has shown an ability to highlight moments when intraday sentiment shifts decisively from buyers to sellers or vice versa. Because the indicator resets every day at 6 PM, it always reflects only today’s sentiment and remains easy to interpret without carrying over past data. You can use it on any intraday timeframe, but it works especially well on five-minute or fifteen-minute charts for futures contracts.
If you want a clear gauge of whether buyers or sellers are dominating in real time, and you prefer a rule-based method rather than a complex model, this indicator gives you exactly that. It shows net buying or selling pressure at a glance, resets each session like most intraday traders do, and marks the moments when that pressure crosses the levels you decide are important. By combining a daily reset with signed volume, you get a single number that tells you precisely what the crowd is doing at any given moment, without any of the guesswork or hidden calculations that more complicated indicators often carry.
[blackcat] L2 Multi-Level Price Condition Tracker
OVERVIEW
The L2 Multi-Level Price Condition Tracker represents an innovative approach to analyzing financial markets by simultaneously monitoring multiple price levels, thus providing traders with a holistic view of market dynamics. By combining dynamic calculations based on moving averages and price deviations, this tool aims to deliver precise and actionable insights into potential entry and exit points. It leverages sophisticated statistical measures to identify key thresholds that signify shifts in market sentiment, thereby aiding traders in making well-informed decisions. 🎯
Key benefits encompass:
• Comprehensive calculation of midpoints and average prices indicating short-term trend directions.
• Interactive visualization elements enhancing interpretability effortlessly.
• Real-time generation of buy/sell signals driven by precise condition evaluations.
TECHNICAL ANALYSIS COMPONENTS
📉 Midpoint Calculations:
Computes central reference points derived from high-low ranges establishing baseline supports/resistances.
Utilizes Simple Moving Averages (SMAs) along with standardized deviation formulas smoothing out volatility while preserving long-term trends accurately.
Facilitates identification of directional biases reflecting underlying market forces dynamically.
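As a purely illustrative sketch of such a midpoint-plus-deviation construction (every length and formula below is an assumption; the script's actual calculations are not reproduced here):
//@version=5
indicator("Midpoint / deviation sketch", overlay=true)
len = input.int(20, "Lookback Period")
src = input.source(close, "Price Source")
mid = ta.sma((high + low) / 2, len)       // midpoint of the high-low range
dev = ta.stdev(src, len)
upperLvl = mid + dev
lowerLvl = mid - dev
plot(mid, "Midpoint", color=color.gray)
plot(upperLvl, "Upper level", color=color.red)
plot(lowerLvl, "Lower level", color=color.green)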
🕵️♂️ Advanced Price Level Detection:
Derives upper/lower bounds adjusting sensitivities adaptively responding to changing conditions flexibly.
Employs proprietary logic distinguishing between bullish/bearish sentiments promptly signaling transitions effectively.
Ensures consistent adherence to predefined statistical protocols maintaining accuracy robustly.
🎥 Dynamic Signal Generation:
Detects crossovers indicating dominance shifts between buyers/sellers promptly triggering timely alerts.
Integrates conditional logic reinforcing signal validity minimizing erroneous activations systematically.
Supports adaptive thresholds tuning sensitivities based on evolving market conditions flexibly accommodating varying scenarios.
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Utilizes moving averages alongside standardized deviation formulas generating precise net volume measurements.
Implements Arithmetic Mean Line Algorithm (AMLA) smoothing techniques improving interpretability.
Ensures consistent alignment with established statistical principles preserving fidelity.
🖱️ User Interface Elements:
Dedicated plots displaying real-time midpoint markers facilitating swift decision-making.
Context-sensitive color coding distinguishing positive/negative deviations intuitively highlighting key activations clearly.
Background shading emphasizing proximity to crucial threshold activations enhancing visibility focusing attention on vital signals promptly.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups validated through multiple confirmatory signals assessing concurrent market sentiment factors.
Validate entry decisions considering alignment between calculated midpoints and broader trend directions ensuring coherence.
Monitor cumulative breaches signifying potential trend reversals executing partial/total closes contingent upon predetermined loss limits preserving capital efficiently.
🚫 Exit Mechanisms:
Trigger exits upon hitting predefined thresholds derived from historical analyses promptly executing closures.
Execute partial/total closes contingent upon cumulative loss limits preserving capital efficiently managing exposures prudently.
Conduct periodic reviews gauging strategy effectiveness rigorously identifying areas needing refinement implementing corrective actions iteratively enhancing performance metrics steadily.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Lookback Period: Governs responsiveness versus stability balancing sensitivity/stability governing moving averages aligning with preferred granularity.
Price Source: Dictates primary data series driving volume calculations selecting relevant inputs accurately tailoring strategies accordingly.
💬 Customization Recommendations:
Commence with baseline defaults; iteratively refine parameters isolating individual impacts evaluating adjustments independently prior to combined modifications minimizing disruptions.
Prioritize minimizing erroneous trigger occurrences first optimizing signal fidelity sustaining balanced risk-reward profiles irrespective of chosen settings upholding disciplined approaches preserving capital efficiently.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with pre-defined maximum leverage constraints adhering strictly to guidelines managing exposures prudently.
Mandatorily apply trailing stop-loss orders conforming to script outputs enforcing discipline rigorously preventing adverse consequences.
Allocate positions proportionately relative to available capital reserves conducting periodic reviews gauging effectiveness continuously identifying improvement opportunities steadily.
⚠️ Potential Pitfalls & Solutions:
Address frequent violations arising during heightened volatility phases necessitating manual interventions judiciously preparing contingency plans proactively mitigating risks effectively.
Manage false alerts warranting immediate attention avoiding adverse consequences systematically implementing corrective actions reliably.
Prepare proactive responses amid adverse movements ensuring seamless functionality amidst fluctuating conditions fortifying resilience against anomalies robustly.
PERFORMANCE MONITORING METRICS
🔍 Evaluation Criteria:
Assess win percentages across diverse instruments to gauge reliability, measure profitability, evaluate downside risk, and uncover systematic biases that may skew outcomes.
Calculate the average profit per winning trade, benchmark actual against expected performance, document results meticulously, and address shortcomings as they appear.
📈 Historical Data Analysis Tools:
Maintain detailed logs of every triggered event, record realized profits and losses, compare them against simulated projections, and investigate any discrepancies.
Identify recurring systematic errors and correct them through iterative refinements to steadily improve robustness.
PROBLEM SOLVING ADVICE
🔧 Frequent Encountered Challenges:
Thinly traded markets can behave unpredictably; filter out low-liquidity assets prone to erratic moves to protect signal integrity.
Latency during abrupt price swings can cause missed opportunities; introduce buffer intervals around major news/events and verify that data connections stay reliable.
💡 Effective Resolution Pathways:
Limit continual re-optimization to prevent model degradation; recalibrate parameters periodically and adapt the strategy as conditions change.
Verify reliable connections so data flows remain uninterrupted and interpretations stay accurate.
THANKS
Heartfelt thanks to all developers who contributed invaluable insights on multi-level, price-condition-based trading methodologies! ✨
[blackcat] L2 Z-Score of Price
OVERVIEW
The L2 Z-Score of Price indicator offers traders an insightful perspective into how current prices diverge from their historical norms through advanced statistical measures. By leveraging Z-scores, it provides a robust framework for identifying potential reversals in financial markets. The Z-score quantifies the number of standard deviations that a data point lies away from the mean, thus serving as a critical metric for recognizing overbought or oversold conditions. 🎯
Key benefits encompass:
• Precise calculation of Z-scores reflecting true price deviations.
• Interactive plotting features enhancing visual clarity.
• Real-time generation of buy/sell signals based on crossover events.
STATISTICAL ANALYSIS COMPONENTS
📉 Mean Calculation:
Utilizes Simple Moving Averages (SMAs) to establish baseline price references.
Provides a smooth reference that filters short-term noise while preserving long-term trends.
Fundamental for deriving subsequent deviation metrics accurately.
📈 Standard Deviation Measurement:
Quantifies dispersion around established means revealing underlying variability.
Crucial for assessing volatility levels so strategies can adapt dynamically.
Facilitates precise Z-score derivations ensuring statistical rigor.
🕵️♂️ Z-SCORE DETECTION:
Measures standardized distances indicating relative positions within distributions.
Helps pinpoint extreme conditions signaling impending reversals proactively.
Enables early identification of trend exhaustion phases prompting timely actions.
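To make the pipeline above concrete, here is a minimal Pine Script v5 sketch of the described calculation: an SMA mean, a standard deviation, and the standardized distance z = (price - mean) / stdev. The +/-2 bands and the crossover signals are illustrative assumptions, not necessarily the script's defaults.

```pine
//@version=5
indicator("Z-Score of Price sketch (illustrative)", overlay=false)

len = input.int(20, "Length")               // lookback for mean and deviation
src = input.source(close, "Price Source")   // series being standardized

mean = ta.sma(src, len)     // baseline reference (SMA)
sd   = ta.stdev(src, len)   // dispersion around that mean

// Z-score: how many standard deviations the price sits away from its mean.
z = sd != 0 ? (src - mean) / sd : 0.0

plot(z, "Z-Score", color = z >= 0 ? color.green : color.red)
hline(0, "Zero")
hline(2, "Upper Threshold")
hline(-2, "Lower Threshold")

// Illustrative reversal signals: re-entry from an extreme (the +/-2 threshold is an assumption).
plotshape(ta.crossover(z, -2), title="Buy",  style=shape.triangleup,   location=location.bottom, color=color.green)
plotshape(ta.crossunder(z, 2), title="Sell", style=shape.triangledown, location=location.top,    color=color.red)
```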
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Integrates SMAs with standard-deviation formulas to generate precise Z-scores.
Employs Arithmetic Mean Line Algorithm (AMLA) smoothing to improve interpretability.
Ensures consistent adherence to standard statistical definitions, maintaining accuracy.
🖱️ User Interface Elements:
Dedicated plots display real-time Z-score markers for swift decision-making.
Context-sensitive color coding distinguishes positive and negative deviations at a glance.
Background shading highlights proximity to key thresholds, enhancing visibility.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups validated through multiple confirmatory signals.
Validate entry decisions considering concurrent market sentiment factors.
Assess alignment between Z-score readings and broader trend directions ensuring coherence.
🚫 Exit Mechanisms:
Trigger exits upon hitting predetermined thresholds derived from historical analyses.
Monitor sustained breaches that signal potential trend reversals and execute closures promptly.
Execute partial/total closes contingent upon cumulative loss limits preserving capital efficiently.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Length: governs the responsiveness-versus-smoothing trade-off, balancing sensitivity and stability.
Price Source: sets the primary data series used in the Z-score computation; choose the input that fits your strategy.
💬 Customization Recommendations:
Commence with baseline defaults; iteratively refine parameters isolating individual impacts.
Evaluate adjustments independently prior to combined modifications minimizing disruptions.
Prioritize reducing false triggers first, then optimize signal fidelity.
Maintain a balanced risk-reward profile regardless of the chosen settings, upholding a disciplined approach.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with pre-defined maximum leverage limits.
Mandatorily apply trailing stop-loss orders conforming to script outputs reinforcing discipline.
Allocate positions proportionately relative to available capital reserves managing exposures prudently.
Conduct periodic reviews gauging strategy effectiveness rigorously identifying areas needing refinement.
⚠️ Potential Pitfalls & Solutions:
High-volatility phases can produce frequent violations; intervene manually when necessary.
Handle false alerts promptly and systematically to avoid adverse consequences.
Prepare contingency plans for possible margin calls along with proactive responses to adverse moves.
Continuously assess automated system reliability amidst fluctuating conditions ensuring seamless functionality.
PERFORMANCE AUDITS & REFINEMENTS
🔍 Critical Evaluation Metrics:
Assess win percentages consistently across diverse trading instruments gauging reliability.
Calculate average profit ratios per successful execution measuring profitability efficiency accurately.
Measure peak drawdown durations alongside associated magnitudes evaluating downside risks comprehensively.
Analyze signal-generation frequency to reveal hidden patterns and uncover systematic biases that may skew outcomes.
📈 Historical Data Analysis Tools:
Maintain comprehensive records capturing every triggered event meticulously documenting results.
Compare realized profits/losses against backtested simulations benchmarking actual vs expected performances accurately.
Identify recurrent systematic errors demanding corrective actions implementing iterative refinements steadily.
Document evolving performance metrics tracking progress dynamically addressing identified shortcomings proactively.
PROBLEM SOLVING ADVICE
🔧 Frequent Encountered Challenges:
Unpredictable behaviors emerging within thinly traded markets requiring filtration processes.
Latency issues manifesting during abrupt price fluctuations causing missed opportunities.
Overfitted models yielding suboptimal results post-extensive tuning demanding recalibrations.
Inaccuracies stemming from incomplete/inaccurate data feeds necessitating verification procedures.
💡 Effective Resolution Pathways:
Exclude low-liquidity assets prone to erratic movements enhancing signal integrity.
Introduce buffer intervals safeguarding major news/event impacts mitigating distortions effectively.
Limit ongoing optimization attempts preventing model degradation maintaining optimal performance levels consistently.
Verify reliable connections ensuring uninterrupted data flows guaranteeing accurate interpretations reliably.
USER ENGAGEMENT SEGMENT
🤝 Community Contributions Welcome
Active participation is highly encouraged; please share your experiences and recommendations!
Liquidity Sweep Detector
The Liquidity Sweep Detector represents a technical analysis tool specifically designed to identify market microstructure patterns typically associated with institutional trading activity. According to Harris (2003), institutional traders frequently employ tactics where they momentarily break through price levels to trigger stop orders before redirecting the market in the opposite direction. This phenomenon, commonly referred to as "stop hunting" or "liquidity sweeping," constitutes a significant aspect of institutional order flow analysis (Osler, 2003). The current implementation provides retail traders with a means to identify these patterns, potentially aligning their trading decisions with institutional movements rather than becoming victims of such strategies.
Osler's (2003) research documents how stop-loss orders tend to cluster around significant price levels, creating concentrations of liquidity. Taylor (2005) argues that sophisticated institutional participants systematically exploit these liquidity clusters by inducing price movements that trigger these orders, subsequently profiting from the ensuing price reaction. The algorithmic detection of such patterns involves several key processes. First, the indicator identifies swing points—local maxima and minima—through comparison with historical price data within a definable lookback period. These swing points correspond to what Bulkowski (2011) describes as "significant pivot points" that frequently serve as liquidity zones where stop orders accumulate.
The core detection algorithm utilizes a multi-stage process to identify potential sweeps. For high sweeps, it monitors when price exceeds a previous swing high by a specified threshold percentage, followed by a bearish candle that closes below the original swing high level. Conversely, for low sweeps, it detects when price drops below a previous swing low by the threshold percentage, followed by a bullish candle closing above the original swing low. As noted by Lo and MacKinlay (2011), these price patterns often emerge when large institutional players attempt to capture liquidity before initiating significant directional moves.
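The script's source is not shown in this description, so the block below is only a simplified Pine Script v5 reconstruction of the detection logic as described: confirmed pivot highs/lows as swing points, a breach beyond the swing level by a threshold percentage, and a reversal candle closing back inside. The parameter names and defaults here are assumptions.

```pine
//@version=5
indicator("Liquidity Sweep sketch (illustrative)", overlay=true)

lookback     = input.int(10, "Swing Lookback")
thresholdPct = input.float(0.1, "Sweep Threshold (%)") / 100.0

// Swing points approximated with confirmed pivots (a simplification of the described swing detection).
swingHigh = ta.pivothigh(high, lookback, lookback)
swingLow  = ta.pivotlow(low, lookback, lookback)

var float lastHigh = na
var float lastLow  = na
if not na(swingHigh)
    lastHigh := swingHigh
if not na(swingLow)
    lastLow := swingLow

// High sweep: price pierces the swing high by the threshold, then a bearish candle closes back below it.
highSweep = not na(lastHigh) and high > lastHigh * (1 + thresholdPct) and close < open and close < lastHigh
// Low sweep: price pierces the swing low by the threshold, then a bullish candle closes back above it.
lowSweep  = not na(lastLow)  and low  < lastLow  * (1 - thresholdPct) and close > open and close > lastLow

bgcolor(highSweep ? color.new(color.red, 85) : lowSweep ? color.new(color.green, 85) : na)
plotshape(highSweep, title="High Sweep", style=shape.labeldown, location=location.abovebar, color=color.red)
plotshape(lowSweep,  title="Low Sweep",  style=shape.labelup,   location=location.belowbar, color=color.green)
```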
The indicator maintains historical arrays of detected sweep events with their corresponding timestamps, enabling temporal analysis of market behavior following such events. Visual elements include horizontal lines marking sweep levels, background color highlighting for sweep events, and an information table displaying active sweeps with their corresponding price levels and elapsed time since detection. This visualization approach allows traders to quickly identify potential institutional activity without requiring complex interpretation of raw price data.
Parameter customization includes adjustable lookback periods for swing point identification, sweep threshold percentages for signal sensitivity, and display duration settings. These parameters allow traders to adapt the indicator to various market conditions and timeframes, as markets demonstrate different liquidity characteristics across instruments and periods (Madhavan, 2000).
Empirical studies by Easley et al. (2012) suggest that retail traders who successfully identify and act upon institutional liquidity sweeps may achieve superior risk-adjusted returns compared to conventional technical analysis approaches. However, as cautioned by Chordia et al. (2008), such patterns should be considered within broader market context rather than in isolation, as their predictive value varies significantly with overall market volatility and liquidity conditions.
References:
Bulkowski, T. (2011). Encyclopedia of Chart Patterns (2nd ed.). John Wiley & Sons.
Chordia, T., Roll, R., & Subrahmanyam, A. (2008). Liquidity and market efficiency. Journal of Financial Economics, 87(2), 249-268.
Easley, D., López de Prado, M., & O'Hara, M. (2012). Flow Toxicity and Liquidity in a High-frequency World. The Review of Financial Studies, 25(5), 1457-1493.
Harris, L. (2003). Trading and Exchanges: Market Microstructure for Practitioners. Oxford University Press.
Lo, A. W., & MacKinlay, A. C. (2011). A Non-Random Walk Down Wall Street. Princeton University Press.
Madhavan, A. (2000). Market microstructure: A survey. Journal of Financial Markets, 3(3), 205-258.
Osler, C. L. (2003). Currency Orders and Exchange Rate Dynamics: An Explanation for the Predictive Success of Technical Analysis. Journal of Finance, 58(5), 1791-1820.
Taylor, M. P. (2005). Official Foreign Exchange Intervention as a Coordinating Signal in the Dollar-Yen Market. Pacific Economic Review, 10(1), 73-82.
[blackcat] L2 Trend Guard Oscillator
OVERVIEW
📊 The L2 Trend Guard Oscillator is a comprehensive technical analysis framework designed specifically to identify market trend reversals using adaptive filtering algorithms that combine price action dynamics with statistical measures of volatility and momentum.
Key Purpose:
Generate reliable early warning signals before major trend changes occur
Provide clear directional bias indicators aligned with institutional investor behavior patterns
Offer risk-managed entry/exit opportunities suitable for various timeframes
TECHNICAL FOUNDATION EXPLAINED
🎓 Core Mechanism Breakdown:
→ Advanced smoothing technique emphasizing recent data points more heavily than older ones
↓ Reduces lag while maintaining signal integrity compared to traditional MA approaches
• Short-term Momentum Assessment:
🔶 Relative strength of the closing price versus recent lower bounds
• Long-term Directional Bias Analysis:
📈 Extended timeframe comparison generating structural context
• Defense Level Generation:
➜ Protective boundary calculation incorporating EMAs for stability enhancement
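Because the oscillator's formula is not published here, the following is a hedged Pine Script v5 sketch of the mechanism outlined above, under stated assumptions: a stochastic-style position of the close within its recent range for the short- and long-term views, a weighted recursive filter that emphasizes recent data, and an EMA-stabilized "defense" boundary. The lengths, the use of the weight factor, and the crossover signal rule are all illustrative assumptions.

```pine
//@version=5
indicator("Trend Guard sketch (illustrative)", overlay=false)

shortLen = input.int(9,  "Short-Term Length")            // short-term momentum window
longLen  = input.int(34, "Long-Term Length")             // long-term directional-bias window
weight   = input.float(2.0, "Weight Factor", minval=1)   // emphasis on the newest observation

// Position of the close inside its recent range, centered on zero (stochastic-style, an assumption).
rangePos(len) =>
    lo = ta.lowest(low, len)
    hi = ta.highest(high, len)
    (hi != lo ? (close - lo) / (hi - lo) * 100 : 50.0) - 50.0

// Weighted recursive smoothing: y = (x*w + y_prev*(n - w)) / n, so newer data counts more (assumption).
smoothStep(x, n, w) =>
    var float y = na
    y := na(y) ? x : (x * w + y * (n - w)) / n
    y

shortPos = rangePos(shortLen)
longPos  = rangePos(longLen)

oscillator = smoothStep(shortPos, shortLen, weight)            // buy/sell signal progression (aqua)
defense    = smoothStep(ta.ema(longPos, 5), longLen, weight)   // EMA-stabilized "defense" boundary (red)

plot(oscillator, "Oscillator", color=color.aqua)
plot(defense, "Defense Level", color=color.red)
hline(0, "Zero Reference", color=color.gray, linestyle=hline.style_dashed)

plotshape(ta.crossover(oscillator, defense),  title="BUY",  style=shape.labelup,   location=location.bottom, color=color.green, text="BUY")
plotshape(ta.crossunder(oscillator, defense), title="SELL", style=shape.labeldown, location=location.top,    color=color.red,   text="SELL")
```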
PARAMETER CONFIGURATION GUIDE
🔧 Adjustable Settings Explained In Detail:
Timeframe Selection:
↔ Controls lookback period sensitivity affecting responsiveness
↕ Adjusts reaction speed vs accuracy trade-off dynamically
Weight Factor Specification:
⚡ Influences emphasis on newer versus historical observations
🎯 Defines key decision-making thresholds clearly
ALGORITHM EXECUTION FLOW
💻 Processing Sequence Overview:
Stage 1:
→ Gather raw pricing inputs across the required periods
↓ Normalize values, preparing them for subsequent processing stages
Stage 2:
✔ Calculate relative-strength positions against established ranges
❌ Filter outliers to maintain signal integrity
⟶ Apply dual-pass filtering to reduce false signals
➡ Generate actionable trading opportunities systematically
VISUALIZATION ARCHITECTURE
🎨 Display Elements Designated Purpose:
🔵 Primary Indicator Traces:
→ Aqua Trace: Buy/Sell Signal Progression
↑ Red Line: Opposing Force Boundary
🟥 Gray Dashed: Zero Reference Point
🏷️ Label System For Critical Events:
✅ BUY: Bullish Opportunity Markers
❌ SELL: Bearish Setup Validations
STRATEGIC IMPLEMENTATION FRAMEWORK
📋 Practical Deployment Steps:
Initial Integration Protocol:
• Select appropriate timeframe matching strategy objectives
• Configure input parameters aligning with target asset behavior traits
• Conduct thorough backtesting under simulated environments initially
Active Monitoring Procedures:
→ Regular observation of labeled event placements versus actual movements
↓ Track confirmation patterns leading up to signaled opportunities carefully
↑ Evaluate overall framework reliability across different regime types regularly
Execution Guidelines Formulation:
✔ Enter positions only after achieving minimum number of confirming inputs
❌ Never act on isolated occurrences that lack adequate supporting evidence
➞ Look for convergent factors strengthening conviction before acting decisively
PERFORMANCE OPTIMIZATION TECHNIQUES
🚀 Continuous Improvement Strategies:
Parameter Calibration Approach:
✓ Start testing default suggested configurations thoroughly
↕ Gradually adjust individual components observing outcome changes methodically
✨ Document findings building personalized version profile incrementally
Context Adaptability Methods:
🔄 Add supplementary indicators enhancing overall reliability when needed
🔧 Remove unnecessary complexity layers avoiding confusion/distracted decisions
💫 Incorporate custom rules adapting specific security behaviors effectively
Efficiency Improvement Tactics:
⚙️ Streamline redundant computational routines wherever possible
♻️ Leverage shared data streams minimizing resource utilization significantly
⏳ Optimize refresh frequencies balancing update speed vs overhead properly