3D Bowling
Introducing the "3D Bowling Game" – a fun and interactive demo scene project in Pine Script, powered by a custom 3D engine! This bowling game showcases the potential of Pine Script for developing engaging and immersive experiences, even within the confines of a trading platform.
To play the game, you'll first be prompted to choose where you want to throw the ball. Next, you'll be asked to draw a line indicating the direction you want the ball to go. Sit back and enjoy as the game takes care of the rest!
The source code features various sections, including:
Types and helper functions to manipulate vectors, matrices, and angles
Routines for calculating cross products, dot products, and vector normalization
Transformation matrices for rotation and scaling
Functions for perspective transformation, mesh transformations, and face normal calculations
Culling and shading algorithms to provide a more realistic visual experience
The project's source code is an excellent starting point for anyone interested in exploring the capabilities of Pine Script beyond the typical trading indicators and strategies. The 3D Bowling Game demonstrates the flexibility of Pine Script and its potential for creating interactive experiences in a seemingly unconventional environment.
So, what are you waiting for? Dive into the source code, tweak it to your liking, or build upon it to create your own interactive 3D experiences. Enjoy the game, and happy coding!
With light. I will say there is an issue with the fact that you can't draw as many linefills as you can lines.
Sniffer
╭━━━╮╱╱╱╱╭━╮╭━╮
┃╭━╮┃╱╱╱╱┃╭╯┃╭╯
┃╰━━┳━╮╭┳╯╰┳╯╰┳━━┳━╮
╰━━╮┃╭╮╋╋╮╭┻╮╭┫┃━┫╭╯
┃╰━╯┃┃┃┃┃┃┃╱┃┃┃┃━┫┃
╰━━━┻╯╰┻╯╰╯╱╰╯╰━━┻╯
Overview
A vast majority of modern data analysis & modelling techniques rely upon the idea of hidden patterns. Whether it is some type of visualisation tool or some form of a complex machine learning algorithm, the one thing they have in common is the belief that patterns tell us what's hidden behind plain numbers. The same philosophy has been adopted by many traders & investors worldwide; there's an entire school of thought that operates purely based on chart patterns. This is where Sniffer comes in: it is a tool designed to simplify & quantify the job of pattern recognition on any given price chart, by combining various factors & techniques that generate high-quality results.
This tool analyses bars selected by the user, and highlights bar clusters on the chart that exhibit similar behaviour across multiple dimensions. It can detect a single candle pattern like hammers or dojis, or it can handle multiple candles like morning/evening stars or double tops/bottoms, and many more. In fact, the tool is completely independent of such specific candle formations; instead, it works on the idea of vector similarity and generates a degree of similarity for every single combination of candles. Only the top-n matches are highlighted; users get to choose which patterns they want to analyse and to what degree by customising the feature-space.
Background
In the world of trading, a common use-case is to scan a price chart for some specific candlestick formations & price structures, and then the chart is further analysed in reference to these events. Traders are often trying to answer questions like: when was the last time price showed similar behaviour, what are the instances similar to what price is doing right now, what happens when price forms a pattern like this, what were some of the other indicators doing when this happened last (RSI, CCI, ADX, etc.), and many other abstract ideas to have a stronger confluence or to confirm a bias. Having such a context can be vital in making better informed decisions, but doing this manually on a chart that has thousands of candles can have many disadvantages. It's tedious, human errors are rather likely, and even if it's done with pin-point accuracy, chances are that we'll miss out on many pieces of information. This is the thought that gave birth to Sniffer.
Sniffer tries to provide a general solution for pattern-based analysis by deploying vector-similarity computation techniques that cover the full breadth of a price chart and generate a list of top-n matches based on the criteria selected by the user. Most of these techniques come from the data science space, where vector similarity is often implemented to solve classification & clustering problems. Sniffer uses the same principles of vector comparison, and computes a degree of similarity for every single candle formation within the selected range, and as a result generates a similarity matrix that captures how similar or dissimilar a set of candles is to the input set selected by the user.
How It Works
A brief overview of how the tool is implemented:
- Every bar is processed, and a set of features are mapped to it.
- Bars selected by the user are captured, and saved for later use.
- Once all the bars have been processed, candles are back-tracked and a degree of similarity is computed for every single bar (max limit is 5000 bars).
- Degree of similarity is computed by comparing attributes like price range, candle breadth & volume etc.
- Similarity matrix is sorted and top-n results are highlighted on the chart through boxes of different colors.
A brief overview of the feature space for bars (a small sketch of these features follows the list):
- Range: Difference between high & low
- Body: Difference between close & open
- Volume: Traded volume for that candle
- Head: Upper wick for green candles & lower wick for red candles
- Tail: Lower wick for green candles & upper wick for red candles
- BTR: Body to Range ratio
- HTR: Head to Range ratio
- TTR: Tail to Range ratio
- HTB: Head to Body ratio
- TTB: Tail to Body ratio
- ROC: Rate of change for HL2 for four different periods
- RSI: Relative Strength Index
- CCI: Commodity Channel Index
- Stochastic: Stochastic Index
- ADX: DMI+, DMI- & ADX
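For illustration only, here is a minimal Python sketch (not the indicator's actual Pine Script) of how per-bar features like the ones above could be derived from raw OHLCV values; the function and field names are my own.

```python
def bar_features(o, h, l, c, v):
    """Illustrative per-bar features loosely following the list above."""
    rng = h - l                              # Range: high - low
    body = abs(c - o)                        # Body: |close - open|
    green = c >= o
    head = (h - c) if green else (c - l)     # Head: upper wick (green) / lower wick (red)
    tail = (o - l) if green else (h - o)     # Tail: lower wick (green) / upper wick (red)
    eps = 1e-12                              # guard against zero-range / zero-body bars
    return {
        "range": rng, "body": body, "volume": v,
        "head": head, "tail": tail,
        "btr": body / (rng + eps),           # Body-to-Range ratio
        "htr": head / (rng + eps),           # Head-to-Range ratio
        "ttr": tail / (rng + eps),           # Tail-to-Range ratio
        "htb": head / (body + eps),          # Head-to-Body ratio
        "ttb": tail / (body + eps),          # Tail-to-Body ratio
    }
```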
A brief overview of how degree of similarity is calculated:
- Each bar set is compared to the input bar set within the selected feature space
- Features are represented as vectors, and distance between the vectors is calculated
- Shorter the distance, greater the similarity
- Different distance calculation methods are available to choose from, such as Cosine, Euclidean, Lorentzian, Manhattan, & Pearson
- Each method is likely to generate slightly different results; users are expected to select the method & the feature space that best fits their use-case (see the distance sketch below)
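As a rough illustration of "shorter distance means greater similarity", below is a small Python sketch of the five distance measures named above, applied to two feature vectors of equal length. These are the textbook formulas, not the indicator's exact implementation.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def lorentzian(a, b):
    return sum(math.log(1.0 + abs(x - y)) for x, y in zip(a, b))

def cosine(a, b):
    # 0 means the vectors point in the same direction
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def pearson(a, b):
    # 0 means the vectors are perfectly correlated
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)
```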
How To Use It
- Usage of this tool is relatively straightforward, users can add this indicator to their chart and similar clusters will be highlighted automatically
- Users need to select a time range that will be treated as input, and bars within that range become the input formation for similarity calculations
- Boxes will be drawn around the clusters that fit the matching criteria
- Boxes are color-coded, green color boxes represent the top one-third of the top-n matches, yellow boxes represent the middle third, red boxes are for bottom third, and white box represents user-input
- Box colors will be adjusted as you adjust input parameters, such as the number of matches or the look-back period
User Settings
Users can configure the following options:
- Select the time-range to set input bars
- Select the look-back period, number of candles to backtrack for similarity search
- Select the number of top-n matches to show on the chart
- Select the method for similarity calculation
- Adjust the feature space, this enables addition of custom features, such as pattern recognition, technical indicators, rate of change etc
- Toggle verbosity, shows degree of similarity as a percentage value inside the box
Top Features
- Pattern Agnostic: Designed to work with variable number of candles & complex patterns
- Customisable Feature Space: Users get to add custom features to each bar
- Comprehensive Comparison: Generates a degree of similarity for all possible combinations
Final Note
- Similarity matches will be shown only within last 4500 bars.
- In theory, it is possible to compute similarity for candle formations of any size. The indicator has been tested with formations of 50+ candles, but it is recommended to select a smaller range for faster & cleaner results.
- As you move to smaller time frames, the selected time range will provide a larger number of candles as input, which can produce undesired results; it is advised to adjust your selection when you change time frames. Seeking suggestions on how to directly receive bars as user input, instead of a time range.
- At times, users may see an array index out of bounds error when setting up this indicator. This generally happens when the input range is not properly configured, so it should disappear after you select the input range. Still trying to figure out where it is coming from; suggestions are welcome.
Credits
- @HeWhoMustNotBeNamed for publishing such a handy PineScript Logger, it certainly made the job a lot easier.
Lex_3CR_Functions_Library2
Library "Lex_3CR_Functions_Library2"
This is the source code for a technical analysis library in the Pine Script language,
designed to identify and mark Bullish and Bearish Three Candle Reversal (3CR) chart patterns.
The library provides three functions to be used in a trading algorithm.
The first function, Bull_3crMarker, adds a dashed line and label to a Bullish 3CR chart pattern, indicating the 3CR point.
The second function, Bear_3crMarker, adds a dashed line and label to a Bearish 3CR chart pattern.
The third function, Bull_3CRlogicals, checks for a Bullish 3CR pattern where the first candle's low is greater than the second candle's low and the second candle's low is less than the third candle's low.
If found, creates a line at the breakout point and a label at the fail point,
if specified. All functions take parameters such as the chart pattern's characteristics and output colors, labels, and markers.
Bull_3crMarker(bulllinearray, barnum, breakpoint, failpointB, failpoint, linecolorbull, bulllabelarray, labelcolor, textcolor, labelon)
Bull_3crMarker Adds a 3CR marker to a Bullish 3CR chart pattern
@description Adds a dashed line and label to a 3CR up chart pattern, indicating the 3CR (3 Candle Reversal) point.
Parameters:
bulllinearray (line )
barnum (int)
breakpoint (float)
failpointB (float )
failpoint (float)
linecolorbull (color)
bulllabelarray (label )
labelcolor (color)
textcolor (color)
labelon (bool)
Bear_3crMarker(bearlinearray, barnum, breakpoint, failpointB, failpoint, linecolorbear, bearlabelarray, labelcolor, textcolor, labelon)
Bear_3crMarker Adds a 3CR marker to a Bearish 3CR chart pattern
@description Adds a dashed line and label to a 3CR down chart pattern, indicating the 3CR (3 Candle Reversal) point.
Parameters:
bearlinearray (line )
barnum (int)
breakpoint (float)
failpointB (float )
failpoint (float)
linecolorbear (color)
bearlabelarray (label )
labelcolor (color)
textcolor (color)
labelon (bool)
Bull_3CRlogicals(low1, low2, low3, bulllinearray, bulllabelarray, failpointB, linecolorbull, labelcolor, textcolor, labelon)
Checks for a bullish three candle reversal pattern and creates a line and label at the breakout point if found
@description Checks for a bullish three candle reversal pattern where the first candle's low is greater than the second candle's low and the second candle's low is less than the third candle's low. If found, creates a line at the breakout point and a label at the fail point, if specified.
Parameters:
low1 (float)
low2 (float)
low3 (float)
bulllinearray (line )
bulllabelarray (label )
failpointB (float )
linecolorbull (color)
labelcolor (color)
textcolor (color)
labelon (bool)
Bear_3CRlogicals(high1, high2, high3, bearlinearray, bearlabelarray, failpointB, linecolorbear, labelcolor, textcolor, labelon)
Checks for a Bearish 3CR pattern and draws a bearish marker on the chart at the appropriate location
@description This function checks for a Bearish 3CR (Three-Candle Reversal) pattern, which is defined as the second candle having a higher high than the first and third candles, and the third candle having a lower high than the first candle. If the pattern is detected, a bearish marker is drawn on the chart at the appropriate location, and an optional label can be added to the marker.
Parameters:
high1 (float)
high2 (float)
high3 (float)
bearlinearray (line )
bearlabelarray (label )
failpointB (float )
linecolorbear (color)
labelcolor (color)
textcolor (color)
labelon (bool)
bullLineDelete(i, bulllinearray, failarray, bulllabelarray, labelon)
Removes a bullish line from a specified position in a line array, and optionally removes a label associated with that line
@description Removes a bullish line from a specified position in a line array, and optionally removes a label associated with that line.
Parameters:
i (int)
bulllinearray (line )
failarray (float )
bulllabelarray (label )
labelon (bool)
bearLineDelete(i, bearlinearray, failarray, bearlabelarray, labelon)
Removes a bearish line from a specified position in a line array, and optionally removes a label associated with that line
@description Removes a bearish line from a specified position in a line array, and optionally removes a label associated with that line.
Parameters:
i (int)
bearlinearray (line )
failarray (float )
bearlabelarray (label )
labelon (bool)
bulloffsetdelete(i, bulllinearray, failarray, bulllabelarray, labelon)
Removes a bullish line from a specified position in a line array, and optionally removes a label associated with that line
@description Removes a bullish line from a specified position in a line array, and optionally removes a label associated with that line.
Parameters:
i (int)
bulllinearray (line )
failarray (float )
bulllabelarray (label )
labelon (bool)
bearoffsetdelete(i, bearlinearray, failarray, bearlabelarray, labelon)
Removes a bearish line from a specified position in a line array, and optionally removes a label associated with that line
@description Removes a bearish line from a specified position in a line array, and optionally removes a label associated with that line.
Parameters:
i (int)
bearlinearray (line )
failarray (float )
bearlabelarray (label )
labelon (bool)
BullEntry_setter(i, bulllinearray, failpointB, entrystopB, entryB, entryboolB)
Checks if the specified value is greater than the break point of any bullish line in an array, and removes that line if true
@description Checks if the specified value is greater than the break point of any bullish line in an array, and removes that line if true.
Parameters:
i (int)
bulllinearray (line )
failpointB (float )
entrystopB (float )
entryB (float )
entryboolB (bool )
Bull3CRchecker(close1, bulllinearray, FailpointB, rsiB, bulllabelarray, labelt, bullcolored, directionarray, rsi, secondbullline, entrystopB, entryB, entryboolB)
Parameters:
close1 (float)
bulllinearray (line )
FailpointB (float )
rsiB (float )
bulllabelarray (label )
labelt (bool)
bullcolored (color)
directionarray (label )
rsi (float)
secondbullline (line )
entrystopB (float )
entryB (float )
entryboolB (bool )
Bear3CRchecker(close1, bearlinearray, FailpointB, bearlabelarray, labelt, bearcolored, directionarray, rsi, secondbearline, rsiB)
Checks if the specified value is less than the break point of any bearish line in an array, and removes that line if true
@description Checks if the specified value is less than the break point of any bearish line in an array, and removes that line if true.
Parameters:
close1 (float)
bearlinearray (line )
FailpointB (float )
bearlabelarray (label )
labelt (bool)
bearcolored (color)
directionarray (label )
rsi (float)
secondbearline (line )
rsiB (float )
Bulloffsetcheck(FailpointB, bulllabelarray, linearray, labelt, offset)
Checks the offset of bullish lines and deletes them if they are beyond a certain offset from the current bar index
@description Checks the offset of bullish lines and deletes them if they are beyond a certain offset from the current bar index
Parameters:
FailpointB (float )
bulllabelarray (label )
linearray (line )
labelt (bool)
offset (int)
Bearoffsetcheck(FailpointB, bearlabelarray, linearray, labelt, offset)
Checks the offset of bearish lines and deletes them if they are beyond a certain offset from the current bar index
@description Checks the offset of bearish lines and deletes them if they are beyond a certain offset from the current bar index
Parameters:
FailpointB (float )
bearlabelarray (label )
linearray (line )
labelt (bool)
offset (int)
Bullfailchecker(close1, FailpointB, bulllabelarray, linearray, labelt)
Checks if the current price has crossed above a bullish fail point and deletes the corresponding line and label
@description Checks if the current price has crossed above a bullish fail point and deletes the corresponding line and label
Parameters:
close1 (float)
FailpointB (float )
bulllabelarray (label )
linearray (line )
labelt (bool)
Bearfailchecker(close1, FailpointB, bearlabelarray, linearray, labelt)
Checks for bearish lines that have failed to trigger and removes them from the chart
@description This function checks for bearish lines that have failed to trigger (i.e., where the current price is above the fail point) and removes them from the chart along with any associated label.
Parameters:
close1 (float)
FailpointB (float )
bearlabelarray (label )
linearray (line )
labelt (bool)
rsibullchecker(rsiinput, rsiBull, secondbullline)
Checks for bullish RSI lines that have failed to trigger and removes them from the chart
@description This function checks for bullish RSI lines that have failed to trigger (i.e., where the current RSI value is below the line's trigger level) and removes them from the chart along with any associated line.
Parameters:
rsiinput (float)
rsiBull (float )
secondbullline (line )
rsibearchecker(rsiinput, rsiBear, secondbearline)
Checks for bearish RSI lines that have failed to trigger and removes them from the chart
@description This function checks for bearish RSI lines that have failed to trigger (i.e., where the current RSI value is above the line's trigger level) and removes them from the chart along with any associated line.
Parameters:
rsiinput (float)
rsiBear (float )
secondbearline (line )
FunctionProbabilityViterbi
Library "FunctionProbabilityViterbi"
The Viterbi Algorithm calculates the most likely sequence of hidden states *(called Viterbi path)*
that results in a sequence of observed events.
viterbi(observations, transitions, emissions, initial_distribution)
Calculate most probable path in a Markov model.
Parameters:
observations (int ) : array . Observation states data.
transitions (matrix) : matrix . Transition probability table, (HxH, H:Hidden states).
emissions (matrix) : matrix . Emission probability table, (OxH, O:Observed states).
initial_distribution (float ) : array . Initial probability distribution for the hidden states.
Returns: array. Most probable path.
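To make the exported function easier to picture, here is a hedged Python/NumPy sketch of the standard Viterbi recursion in log space. Parameter names follow the listing above, but note that this sketch stores its emission table as hidden x observed, whereas the library documents it as O x H; this is not the library's code.

```python
import numpy as np

def viterbi(observations, transitions, emissions, initial_distribution):
    """Log-space Viterbi. observations: observed-state indices; transitions: H x H with
    transitions[i, j] = P(next hidden j | hidden i); emissions: H x O with
    emissions[i, k] = P(observed k | hidden i); initial_distribution: length-H prior.
    Returns the most probable hidden-state path."""
    eps = 1e-300                                   # avoid log(0)
    A = np.log(np.asarray(transitions) + eps)
    B = np.log(np.asarray(emissions) + eps)
    pi = np.log(np.asarray(initial_distribution) + eps)
    T, H = len(observations), len(initial_distribution)
    delta = np.zeros((T, H))                       # best log-probability ending in each state
    psi = np.zeros((T, H), dtype=int)              # back-pointers
    delta[0] = pi + B[:, observations[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + A         # scores[i, j]: come from i, land on j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(H)] + B[:, observations[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                  # walk the back-pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```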
ICT Macros by Cryptofor
Time periods in which the price is most volatile. At this time, the algorithm is programmed to attack liquidity or fill a significant FVG from which the OF can continue.
Plots of macros:
1. London Macros:
02:33 - 03:00
04:03 - 04:30
2. New York AM Macros:
08:50 - 09:10
09:50 - 10:10
10:50 - 11:10
3. New York Lunch + PM Macros:
11:50 - 12:10
13:10 - 13:40
15:15 - 15:45
Features:
Flexible line settings
Flexible text settings
Display data for all time or for the last 24 hours
Switch for each type of macro
Macro background color settings
Advanced VWAP_Pullback Strategy_Trend-Template Qualifier
General Description and Unique Features of this Script
Introducing the Advanced VWAP Momentum-Pullback Strategy (long-only) that offers several unique features:
1. Our script/strategy utilizes Mark Minervini's Trend-Template as a qualifier for identifying stocks and other financial securities in confirmed uptrends. Mark Minervini, a 2x US Investment Champion, developed the Trend-Template, which covers eight different and independent characteristics that can be adjusted and optimized in this trend-following strategy to ensure the best results. The strategy will only trigger buy-signals in case the optimized qualifiers are being met.
2. Our strategy is based on the supply/demand balance in the market, making it timeless and effective across all timeframes. Whether you are day trading using 1- or 5-min charts or swing-trading using daily charts, this strategy can be applied and works very well.
3. We have also integrated technical indicators such as the RSI and the MA / VWAP crossover into this strategy to identify low-risk pullback entries in the context of confirmed uptrends. By doing so, the risk profile of this strategy and drawdowns are being reduced to an absolute minimum.
Minervini’s Trend-Template and the ‘Stage-Analysis’ of the Markets
This strategy is a so-called 'long-only' strategy. This means that we only take long positions, short positions are not considered.
The best market environment for such strategies are periods of stable upward trends in the so-called stage 2 - uptrend.
In stable upward trends, we increase our market exposure and risk.
In sideways markets and downward trends or bear markets, we reduce our exposure very quickly or go 100% to cash and wait for the markets to recover and improve. This allows us to avoid major losses and drawdowns.
This simple rule gives us a significant advantage over most undisciplined traders and amateurs!
'The Trend is your Friend'. This is a very old but true quote.
What's behind it???
• 98% of stocks made their biggest gains in a Phase 2 upward trend.
• If a stock is in a stable uptrend, this is evidence that larger institutions are buying the stock sustainably.
• By focusing on stocks that are in a stable uptrend, the chances of profit are significantly increased.
• In a stable uptrend, investors know exactly what to expect from further price developments. This makes it possible to locate low-risk entry points.
The goal is not to buy at the lowest price – the goal is to buy at the right price!
Each stock goes through the same maturity cycle – it starts at stage 1 and ends at stage 4
Stage 1 – Neglect Phase – Consolidation
Stage 2 – Progressive Phase – Accumulation
Stage 3 – Topping Phase – Distribution
Stage 4 – Downtrend – Capitulation
This strategy focuses on identifying stocks in confirmed stage 2 uptrends. This in itself gives us an advantage over long-term investors and less professional traders.
By focusing on stocks in a stage 2 uptrend, we avoid losses in downtrends (stage 4) or less profitable consolidation phases (stages 1 and 3). We put our money to work for us and are fully invested when stocks are in their stage 2 uptrends.
But how can we use technical chart analysis to find stocks that are in a stable stage 2 uptrend?
Mark Minervini has developed the so-called 'trend template' for this purpose. This is an essential part of our JS-TechTrading pullback strategy. For our watchlists, only those individual values that meet the tough requirements of Minervini's trend template are eligible.
The Trend Template
• 200d MA increasing over a period of at least 1 month, better 4-5 months or longer
• 150d MA above 200d MA
• 50d MA above 150d MA and 200d MA
• Price above 50d MA, 150d MA and 200d MA
• Ideally, the 50d MA is increasing over at least 1 month
• Price at least 25% above the 52w low
• Price within 25% of 52w high
• High relative strength according to IBD.
NOTE: In this basic version of the script, the Trend-Template has to be used as a separate indicator on TradingView (Public Trend-Template indicators are available in TradingView – community scripts). It is recommended to only execute buy signals in case the stock or financial security is in a stage 2 uptrend, which means that the criteria of the trend-template are fulfilled.
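For readers who want to see the rules above as logic, here is a small Python sketch of a Trend-Template check. It omits the IBD relative-strength criterion (which needs external rating data), and the function and parameter names are illustrative rather than taken from the strategy's code.

```python
def trend_template_ok(close, ma50, ma150, ma200, low_52w, high_52w,
                      ma200_1m_ago, ma50_1m_ago):
    """Rough check of the Trend-Template criteria listed above.
    All inputs are current values except the *_1m_ago ones (values ~1 month back)."""
    return all([
        ma200 > ma200_1m_ago,                               # 200d MA rising over ~1 month
        ma150 > ma200,                                      # 150d MA above 200d MA
        ma50 > ma150 and ma50 > ma200,                      # 50d MA above both slower MAs
        close > ma50 and close > ma150 and close > ma200,   # price above all three MAs
        ma50 > ma50_1m_ago,                                 # ideally the 50d MA is also rising
        close >= 1.25 * low_52w,                            # at least 25% above the 52-week low
        close >= 0.75 * high_52w,                           # within 25% of the 52-week high
    ])
```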
This strategy can be applied to all timeframes from 5 min to daily.
The VWAP Momentum-Pullback Strategy
For the JS-TechTrading VWAP Momentum-Pullback Strategy, only stocks and other financial instruments that meet the selected criteria of Mark Minervini's trend template are recommended for algorithmic trading with this strategy.
A further prerequisite for generating a buy signal is that the individual value is in a short-term oversold state (RSI).
When the selling pressure is over and the continuation of the uptrend can be confirmed by the MA / VWAP crossover after reaching a price low, a buy signal is issued by this strategy.
Stop-loss limits and profit targets can be set variably. You also have the option to make use of the trailing stop exit strategy.
Relative Strength Index (RSI)
The Relative Strength Index (RSI) is a technical indicator developed by Welles Wilder in 1978. The RSI is used to perform a market value analysis and identify the strength of a trend as well as overbought and oversold conditions. The indicator is calculated on a scale from 0 to 100 and shows how much an asset has risen or fallen relative to its own price in recent periods.
The RSI is calculated as the ratio of average profits to average losses over a certain period of time. A high value of the RSI indicates an overbought situation, while a low value indicates an oversold situation. Typically, a value > 70 is considered an overbought threshold and a value < 30 is considered an oversold threshold. A value above 70 signals that a single value may be overvalued and a decrease in price is likely, while a value below 30 signals that a single value may be undervalued and an increase in price is likely.
For example, let's say you're watching a stock XYZ. After a prolonged falling movement, the RSI value of this stock has fallen to 26. This means that the stock is oversold and that it is time for a potential recovery. Therefore, a trader might decide to buy this stock in the hope that it will rise again soon.
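As a reference for how RSI values like the one in the example are produced, here is a short Python sketch of Wilder's RSI (it assumes the close series is longer than the chosen period); this is standard textbook logic, not this strategy's Pine code.

```python
def rsi(closes, period=14):
    """Wilder's RSI: ratio of smoothed average gains to losses, scaled to 0-100."""
    gains, losses = [], []
    for prev, curr in zip(closes[:-1], closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):   # Wilder smoothing for later bars
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```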
The MA / VWAP Crossover Trading Strategy
This strategy combines two popular technical indicators: the Moving Average (MA) and the Volume Weighted Average Price (VWAP). The MA VWAP crossover strategy is used to identify potential trend reversals and entry/exit points in the market.
The VWAP is calculated by taking the average price of an asset for a given period, weighted by the volume traded at each price level. The MA, on the other hand, is calculated by taking the average price of an asset over a specified number of periods. When the MA crosses above the VWAP, it suggests that buying pressure is increasing, and it may be a good time to enter a long position. When the MA crosses below the VWAP, it suggests that selling pressure is increasing, and it may be a good time to exit a long position or enter a short position.
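The crossover logic just described can be sketched as follows in Python. This is a simplified illustration (plain closes instead of typical price, no session anchoring for the VWAP), and the function names are mine, not the strategy's.

```python
def vwap(prices, volumes):
    """Volume-weighted average price over the supplied window (e.g. one session)."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

def sma(prices, length):
    """Simple moving average of the last `length` prices."""
    return sum(prices[-length:]) / length

def crossover_signal(prices, volumes, ma_len=20):
    """'long' when the MA crosses above VWAP, 'exit' when it crosses below, else None."""
    ma_now, vwap_now = sma(prices, ma_len), vwap(prices, volumes)
    ma_prev, vwap_prev = sma(prices[:-1], ma_len), vwap(prices[:-1], volumes[:-1])
    if ma_prev <= vwap_prev and ma_now > vwap_now:
        return "long"
    if ma_prev >= vwap_prev and ma_now < vwap_now:
        return "exit"
    return None
```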
Traders typically use the MA VWAP crossover strategy in conjunction with other technical indicators and fundamental analysis to make more informed trading decisions. As with any trading strategy, it is important to carefully consider the risks and potential rewards before making any trades.
This strategy is applicable to all timeframes and the relevant parameters for the underlying indicators (RSI and MA/VWAP) can be adjusted and optimized as needed.
Backtesting
Backtesting gives outstanding results on all timeframes and drawdowns can be reduced to a minimum level. In this example, the hourly chart for MCFT has been used.
Settings for backtesting are:
- Period from Jan 2020 until March 2023
- Starting capital 100k USD
- Position size = 25% of equity
- 0.01% commission = USD 2.50.- per Trade
- Slippage = 2 ticks
Other comments
- This strategy has been designed to identify the most promising, highest probability entries and trades for each stock or other financial security.
- The combination of the Trend-Template and the RSI qualifiers results in a highly selective strategy which only considers the most promising swing-trading entries. As a result, you will normally only find a low number of trades for each stock or other financial security per year in case you apply this strategy for the daily charts. Shorter timeframes will result in a higher number of trades / year.
- Consequently, traders need to apply this strategy for a full watchlist rather than just one financial security.
Concentrated Market Maker Strategy by oxowl
Concentrated Market Maker Strategy by oxowl. This script plots an upper and lower bound for liquidity provision, and checks for rebalancing conditions. It also includes alert conditions for when the price crosses the upper or lower bounds.
Here's an overview of the script:
It defines the input parameters: liquidity range percentage, rebalance frequency in minutes, and minimum trade size in assets.
It calculates the upper and lower bounds for liquidity provision based on the liquidity range percentage.
It initializes variables for the last rebalance time and price.
It defines a rebalance condition based on the frequency and current price within the specified range.
If the rebalance condition is met, it updates the last rebalance time and price.
It plots the upper and lower bounds on the chart as lines and adds price labels for both bounds.
It defines alert conditions for when the price crosses the upper or lower bounds.
Finally, it creates alert conditions with appropriate messages for when the price crosses the upper or lower bounds.
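A rough Python sketch of the bound and rebalance logic described in this overview might look like the following; the parameter names mirror the inputs listed above but are otherwise my own, and the real script works on bar data inside TradingView.

```python
from datetime import datetime, timedelta

def liquidity_bounds(mid_price, range_pct):
    """Upper/lower liquidity bounds a fixed percentage around the current price."""
    upper = mid_price * (1 + range_pct / 100)
    lower = mid_price * (1 - range_pct / 100)
    return lower, upper

def should_rebalance(now, last_rebalance, freq_minutes, price, lower, upper):
    """Rebalance when enough time has passed and price is still inside the range."""
    due = now - last_rebalance >= timedelta(minutes=freq_minutes)
    in_range = lower <= price <= upper
    return due and in_range
```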
Concentrated liquidity is a concept often used in decentralized finance (DeFi) market-making strategies. It allows liquidity providers (LPs) to focus their liquidity within a specific price range, rather than across the entire price curve. Using an indicator with concentrated liquidity can offer several advantages:
Increased capital efficiency: Concentrated liquidity allows LPs to allocate their capital within a narrower price range. This means that the same amount of capital can generate more significant price impact and potentially higher returns compared to providing liquidity across a broader range.
Customized risk exposure: LPs can choose the price range they feel most comfortable with, allowing them to better manage their risk exposure. By selecting a range based on their market outlook, they can optimize their positions to maximize potential returns.
Adaptive strategies: Indicators that support concentrated liquidity can help traders adapt their strategies based on market conditions. For example, they can choose to provide liquidity around a stable price range during low-volatility periods or adjust their range when market conditions change.
To continue integrating this script into your trading strategy, follow these steps:
Import the script into your TradingView account. Navigate to the Pine editor, paste the code, and save it as a new script.
Apply the indicator to a trading pair chart. You can customize the input parameters (liquidity range percentage, rebalance frequency, and minimum trade size) based on your preferences and risk tolerance.
Set alerts for when the price crosses the upper or lower bounds. This will notify you when it's time to take action, such as adding or removing liquidity, or rebalancing your position.
Monitor the performance of your strategy over time. Adjust the input parameters as needed to optimize your returns and manage risk effectively.
(Optional) Integrate the script with a trading bot or automation platform. If you're using an API-based trading solution, you can incorporate the logic and conditions from the script into your bot's algorithm to automate the process of providing concentrated liquidity and rebalancing your positions.
Remember that no strategy is foolproof, and past performance is not indicative of future results. Always exercise caution when trading and carefully consider your risk tolerance.
JS-TechTrading: VWAP Momentum_Pullback Strategy
General Description and Unique Features of this Script
Introducing the VWAP Momentum-Pullback Strategy (long-only) that offers several unique features:
1. Our script/strategy utilizes Mark Minervini's Trend-Template as a qualifier for identifying stocks and other financial securities in confirmed uptrends.
NOTE: In this basic version of the script, the Trend-Template has to be used as a separate indicator on TradingView (Public Trend-Template indicators are available on TradingView – community scripts). It is recommended to only execute buy signals in case the stock or financial security is in a stage 2 uptrend, which means that the criteria of the trend-template are fulfilled.
2. Our strategy is based on the supply/demand balance in the market, making it timeless and effective across all timeframes. Whether you are day trading using 1- or 5-min charts or swing-trading using daily charts, this strategy can be applied and works very well.
3. We have also integrated technical indicators such as the RSI and the MA / VWAP crossover into this strategy to identify low-risk pullback entries in the context of confirmed uptrends. By doing so, the risk profile of this strategy and drawdowns are being reduced to an absolute minimum.
Minervini’s Trend-Template and the ‘Stage-Analysis’ of the Markets
This strategy is a so-called 'long-only' strategy. This means that we only take long positions, short positions are not considered.
The best market environment for such strategies are periods of stable upward trends in the so-called stage 2 - uptrend.
In stable upward trends, we increase our market exposure and risk.
In sideways markets and downward trends or bear markets, we reduce our exposure very quickly or go 100% to cash and wait for the markets to recover and improve. This allows us to avoid major losses and drawdowns.
This simple rule gives us a significant advantage over most undisciplined traders and amateurs!
'The Trend is your Friend'. This is a very old but true quote.
What's behind it???
• 98% of stocks made their biggest gains in a Phase 2 upward trend.
• If a stock is in a stable uptrend, this is evidence that larger institutions are buying the stock sustainably.
• By focusing on stocks that are in a stable uptrend, the chances of profit are significantly increased.
• In a stable uptrend, investors know exactly what to expect from further price developments. This makes it possible to locate low-risk entry points.
The goal is not to buy at the lowest price – the goal is to buy at the right price!
Each stock goes through the same maturity cycle – it starts at stage 1 and ends at stage 4
Stage 1 – Neglect Phase – Consolidation
Stage 2 – Progressive Phase – Accumulation
Stage 3 – Topping Phase – Distribution
Stage 4 – Downtrend – Capitulation
This strategy focuses on identifying stocks in confirmed stage 2 uptrends. This in itself gives us an advantage over long-term investors and less professional traders.
By focusing on stocks in a stage 2 uptrend, we avoid losses in downtrends (stage 4) or less profitable consolidation phases (stages 1 and 3). We put our money to work for us and are fully invested when stocks are in their stage 2 uptrends.
But how can we use technical chart analysis to find stocks that are in a stable stage 2 uptrend?
Mark Minervini has developed the so-called 'trend template' for this purpose. This is an essential part of our JS-TechTrading pullback strategy. For our watchlists, only those individual values that meet the tough requirements of Minervini's trend template are eligible.
The Trend Template
• 200d MA increasing over a period of at least 1 month, better 4-5 months or longer
• 150d MA above 200d MA
• 50d MA above 150d MA and 200d MA
• Price above 50d MA, 150d MA and 200d MA
• Ideally, the 50d MA is increasing over at least 1 month
• Price at least 25% above the 52w low
• Price within 25% of 52w high
• High relative strength according to IBD.
NOTE: In this basic version of the script, the Trend-Template has to be used as a separate indicator on TradingView (Public Trend-Template indicators are available in TradingView – community scripts). It is recommended to only execute buy signals in case the stock or financial security is in a stage 2 uptrend, which means that the criteria of the trend-template are fulfilled.
This strategy can be applied to all timeframes from 5 min to daily.
The VWAP Momentum-Pullback Strategy
For the JS-TechTrading VWAP Momentum-Pullback Strategy, only stocks and other financial instruments that meet the selected criteria of Mark Minervini's trend template are recommended for algorithmic trading with this strategy.
A further prerequisite for generating a buy signal is that the individual value is in a short-term oversold state (RSI).
When the selling pressure is over and the continuation of the uptrend can be confirmed by the MA / VWAP crossover after reaching a price low, a buy signal is issued by this strategy.
Stop-loss limits and profit targets can be set variably.
Relative Strength Index (RSI)
The Relative Strength Index (RSI) is a technical indicator developed by Welles Wilder in 1978. The RSI is used to perform a market value analysis and identify the strength of a trend as well as overbought and oversold conditions. The indicator is calculated on a scale from 0 to 100 and shows how much an asset has risen or fallen relative to its own price in recent periods.
The RSI is calculated as the ratio of average profits to average losses over a certain period of time. A high value of the RSI indicates an overbought situation, while a low value indicates an oversold situation. Typically, a value > 70 is considered an overbought threshold and a value < 30 is considered an oversold threshold. A value above 70 signals that a single value may be overvalued and a decrease in price is likely, while a value below 30 signals that a single value may be undervalued and an increase in price is likely.
For example, let's say you're watching a stock XYZ. After a prolonged falling movement, the RSI value of this stock has fallen to 26. This means that the stock is oversold and that it is time for a potential recovery. Therefore, a trader might decide to buy this stock in the hope that it will rise again soon.
The MA / VWAP Crossover Trading Strategy
This strategy combines two popular technical indicators: the Moving Average (MA) and the Volume Weighted Average Price (VWAP). The MA VWAP crossover strategy is used to identify potential trend reversals and entry/exit points in the market.
The VWAP is calculated by taking the average price of an asset for a given period, weighted by the volume traded at each price level. The MA, on the other hand, is calculated by taking the average price of an asset over a specified number of periods. When the MA crosses above the VWAP, it suggests that buying pressure is increasing, and it may be a good time to enter a long position. When the MA crosses below the VWAP, it suggests that selling pressure is increasing, and it may be a good time to exit a long position or enter a short position.
Traders typically use the MA VWAP crossover strategy in conjunction with other technical indicators and fundamental analysis to make more informed trading decisions. As with any trading strategy, it is important to carefully consider the risks and potential rewards before making any trades.
This strategy is applicable to all timeframes and the relevant parameters for the underlying indicators (RSI and MA/VWAP) can be adjusted and optimized as needed.
Backtesting
Backtesting gives outstanding results on all timeframes and drawdowns can be reduced to a minimum level. In this example, the hourly chart for MCFT has been used.
Settings for backtesting are:
- Period from April 2020 until April 2021 (1 yr)
- Starting capital 100k USD
- Position size = 25% of equity
- 0.01% commission = USD 2.50.- per Trade
- Slippage = 2 ticks
Other comments
• This strategy has been designed to identify the most promising, highest probability entries and trades for each stock or other financial security.
• The RSI qualifier is highly selective and filters out the most promising swing-trading entries. As a result, you will normally only find a low number of trades for each stock or other financial security per year in case you apply this strategy for the daily charts. Shorter timeframes will result in a higher number of trades / year.
• As a result, traders need to apply this strategy for a full watchlist rather than just one financial security.
Advanced Price Direction Algorithm
Prices can go up or down or falter in their movement.
This code evaluates this by looking at two consecutive bars or sets of bars.
If you put the set size to 1, the current and previous bar is evaluated.
If set to 2, the last 2 bars and the 2 bars before these are evaluated.
Default is 12 because this seems to coincide with trend changes.
This code provides an advanced way to evaluate what the price does in a sort of three-value Boolean with the values up, down or falter.
I use this code in indicators I develop where price direction is taken into account.
The simple output makes it possible to use it as an indicator on its own.
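A possible Python sketch of this three-valued evaluation is shown below. Averaging the two sets of closes and using a tolerance band is my own simplification for illustration; the published script may decide "falter" differently.

```python
def direction(closes, set_size=12, tolerance=0.0):
    """Compare the last `set_size` closes with the `set_size` closes before them.
    Returns 'up', 'down' or 'falter' (the three-valued output described above)."""
    recent = closes[-set_size:]
    prior = closes[-2 * set_size:-set_size]
    diff = sum(recent) / set_size - sum(prior) / set_size
    if diff > tolerance:
        return "up"
    if diff < -tolerance:
        return "down"
    return "falter"
```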
Weis V5 zigzag jayy
Somehow, I deleted version 5 of the zigzag script. Same name. I have added some older notes describing how the Weis Wave works.
I have also changed the date restriction that stopped the script from working after Dec 31, 2022.
What you see here is the Weis zigzag wave plotted directly on the price chart. This script is the companion to the Weis cumulative wave volume script.
What is a Weis wave? David Weis has been recognized as a Wyckoff method analyst. He has written two books, one of which, Trades About to Happen, describes the evolution of the now-popular Weis wave. The method employed by Weis is to identify waves of price action and to compare the strength of the waves based on characteristics of wave strength. Chief among the characteristics of strength is the cumulative volume of the wave. There are other markers that Weis uses as well, for example the actual price difference between the start and finish of a Weis wave. Weis also uses time, particularly when using a Renko chart.
David Weis did a futures.io video which is a popular source of information about his method (search David Weis and futures.io). I strongly suggest you also read "Trades About to Happen" by David Weis.
This will get you up and running more quickly when studying charts. However, you should choose the Traditional method to be true to David Weis' technique as described in his book "Trades About to Happen" and in the Futures IO Webcast featuring David Weis.
The Weis pip zigzag wave shows how far, in terms of bar close price, a Weis wave has traveled through the duration of a Weis wave. The Weis zigzag wave is used in combination with the Weis cumulative volume wave. The two waves should be set to the same "wave size".
To use this script, you must set the wave size. Using the traditional Weis method, simply enter the desired wave size in the box "How should wave size be calculated"; in this example I am using a traditional wave size of .25. Each wave for each security and each timeframe requires its own wave size. Although not the traditional method devised by David Weis, a more automatic way to set wave size would be to use Average True Range (ATR). Using ATR is not the true Weis method but it does give you similar waves and, importantly, without the hassle described above. Once the Weis wave size is set, the zigzag wave will be shown with volume. Because Weis used the closing price to define waves, bar highs and bar lows are not captured by the Weis wave. The default script setting is now cumulative volume waves using an ATR of 7 and a multiplication factor of .5.
To display volume in a way that does not crowd out neighbouring volumes, Weis displayed volume as a maximum of 3 digits (usually). Consider two Weis wave volumes: 176,895,570 and 2,654,763,889. To display wave volume as three digits it is necessary to take a number such as 176,895,570 and truncate it. 176,895,570 can be represented as 177 X 10 to the power of 6. The number displayed must also be relative to other numbers in the field. If the highest volume on the page is 2,654,763,889, then with only three digits available to display the result, the value shown must be 265 (265 X 10 to the power of 7). Since 176,895,570 is an order of magnitude smaller than 2,654,763,889, it must be shown as 18 instead of 177. In this way, the relative magnitudes of the two volumes can be understood. All numbers in the field of view must be truncated by the same order of magnitude to make the relative volumes understandable. The script attempts to calculate the order of magnitude value automatically. If you see a red number in the field of view it means the script has failed to do the calculation automatically and you should use the manual method – use the dialogue box "Calculate truncated wave value automatically or manually". Scroll down from the automatic method and select manual. Once "manual" is selected the values displayed become the power values or multipliers for each wave.
Using the manual method you will select a "Multiplier" in the next dialogue box. Scan the field and select the largest value in the field of view (visible chart); this is the multiplier of interest. If you select a number lower than the maximum value, you will see at least one red "up". If you are too high, you will see at least one red "down". Scroll in the direction recommended or the values on the screen will be totally incorrect. With volume truncated to the highest order values, the eye can quickly get a feel for relative volumes. It also reduces the crowding and overlapping of values on the screen. You can opt to show the full volume to help get a sense of the magnitude of the true volumes.
How does the script determine if a Weis wave is continuing to grow or not?
The script evaluates the closing price of each new bar relative to the "Weis wave size". Suppose the current bar closes at a new low close, within the current down wave, at $30.00. If the Weis wave size is $0.10 then the algorithm will remember the $30.00 close and compare it to the close of the next bar. If the bar close price does not close equal to or lower than $30.00, or close equal to or higher than $30.10, then the wave is still a down wave with a current low of $30.00. This is true even if the bar low is less than $30.00 or the bar high is greater than $30.10 – only the bar's closing price matters. If a bar's closing price climbs back up to a close of $30.10 or higher, then because the closing price has moved at least $0.10 (the Weis wave size) that is a wave reversal with a new up-trending wave. In the above example, if there was currently a downward trending wave and the bar closes were $30.00, $30.09, $30.01, $30.05, $30.10, the wave direction would continue to stay downward trending until the close of $30.10 was achieved. As such, $30.00 would be the low and the following closes $30.09, $30.01, $30.05 would be allocated to the new upward-trending wave. If, however, there was a series of bar closes like $30.00, $30.09, $30.01, $30.05, $29.99, then since none of the closes was equal to or above the 10-cent reversal target of $30.10 but instead a new Weis wave low was achieved ($29.99), the closes of $30.09, $30.01, $30.05 would all be attributed to the continued down-trending wave with a current low of $29.99, even though the closing price for the interim bars was above $30.00. Now that the Weis wave low is $29.99, in order to reverse this continued downtrend price will need to close at or above $30.09 on subsequent bar closes, assuming no new low bar close is achieved. With large wave sizes, wave direction can be in limbo for many bars before a close either renews wave direction or reverses it and confirms wave direction as either a reversal or a continuation. On the zig-zag, a wave line and its volume will not be "printed" until a wave reversal is confirmed.
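The close-based wave bookkeeping described above can be summarised in a small Python sketch. It is a simplified state machine for illustration only (one step per bar close), not the published Pine code.

```python
def update_wave(direction, extreme, close, wave_size):
    """One step of the close-based wave logic: direction is 'down' or 'up', extreme is the
    current wave's extreme close (e.g. $30.00), wave_size is e.g. 0.10.
    Returns (direction, extreme, reversed_flag)."""
    if direction == "down":
        if close <= extreme:                  # new low close extends the down wave
            return "down", close, False
        if close >= extreme + wave_size:      # close wave_size above the low -> reversal up
            return "up", close, True
        return "down", extreme, False         # in limbo: down wave still intact
    else:
        if close >= extreme:                  # new high close extends the up wave
            return "up", close, False
        if close <= extreme - wave_size:      # close wave_size below the high -> reversal down
            return "down", close, True
        return "up", extreme, False           # in limbo: up wave still intact
```

Running the example from the text ($30.00, $30.09, $30.01, $30.05, $30.10 with a $0.10 wave size) keeps the wave "down" with extreme $30.00 until the $30.10 close, which flips it to "up".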
The wave attribution is similar when using other methods to define wave size. If ATR is used for wave size instead of a traditional wave constant size such as $0.10 or $2 or 2000 pips or ... then the wave size is calculated based on current ATR instead of the Weis wave constant (Traditional selected value).
I have included the option to display pseudo-Ord volume. In truth, Ord used more traditional zig-zag pivots of bar highs and lows. Waves using closes as pivots can have some significant differences. This difference can be lessened by using smaller time frames and larger wave sizes.
There are other options, such as displaying the delta price or pip size of a Weis wave, the number of bars in a wave, and a few others.
ICT Implied Fair Value Gap (IFVG) [LuxAlgo]
An Implied Fair Value Gap (IFVG) is a three-candle imbalance formation conceptualized by ICT that is based on detecting a larger candle body & then measuring the average between the two adjacent candle shadows.
This indicator automatically detects this imbalance formation on your charts and can be extended by a user set number of bars.
The IFVG average can also be extended until a new respective IFVG is detected, serving as a support/resistance line.
Alerts for the detection of bullish/bearish IFVG's are also included in this script.
🔶 SETTINGS
Shadow Threshold %: Threshold percentage used to filter out IFVG's with low adjacent candles shadows.
IFVG Extension: Number of bars used to extend highlighted IFVG's areas.
Extend Averages: Extend IFVG's averages up to a new detected respective IFVG.
🔶 USAGE
Users of this indicator can primarily find it useful for trading imbalances just as they would for trading regular Fair Value Gaps or other imbalances, which aims to highlight a disparity between supply & demand.
For trading a bullish IFVG, users can find this imbalance as an area where price is likely to fill or act as an area of support.
In the same way, a user could trade bearish IFVGs by seeing it as a potential area to be filled or act as resistance within a downtrend.
Users can also extend the IFVG averages and use them as longer-term support/resistances levels. This can highlight the ability of detected IFVG to provide longer term significant support and resistance levels.
🔶 DETAILS
Various methods have been proposed for the detection of regular FVG's, and as such it would not be uncommon to see various methods for the implied version.
We propose the following identification rules for the algorithmic detection of IFVG's:
🔹 Bullish
Central candle body is larger than the body of the adjacent candles.
Current price low is higher than high price two bars ago.
Current candle lower shadow makes up more than p percent of its total candle range.
Candle upper shadow two bars ago makes up more than p percent of its total candle range.
The average of the current candle lower shadow is greater than the average of the candle upper shadow two bars ago.
where p is the user set threshold.
🔹 Bearish
Central candle body is larger than the body of the adjacent candles.
Current price high is higher than low price two bars ago.
Current candle upper shadow makes up more than p percent of its total candle range.
Candle lower shadow two bars ago makes up more than p percent of its total candle range.
The average of the candle lower shadow 2 bars ago is greater than the average of the current candle higher shadow.
where p is the user set threshold.
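As an illustration only, the bullish rules above can be translated almost line-for-line into the following Python sketch; the "average" of a shadow is read here as its midpoint price, the threshold p is expressed as a fraction, and the bearish case would mirror it. This is not LuxAlgo's source code.

```python
def bullish_ifvg(o, h, l, c, p=0.5):
    """Bullish check following the rules listed above.
    o, h, l, c are 3-element lists (index 0 = two bars ago, 2 = current bar);
    p is the shadow threshold as a fraction of the candle range."""
    body = [abs(c[i] - o[i]) for i in range(3)]
    rng = [h[i] - l[i] for i in range(3)]
    upper0 = h[0] - max(o[0], c[0])                 # upper shadow two bars ago
    lower2 = min(o[2], c[2]) - l[2]                 # current candle's lower shadow
    upper0_mid = (h[0] + max(o[0], c[0])) / 2       # midpoint ("average") of that upper shadow
    lower2_mid = (l[2] + min(o[2], c[2])) / 2       # midpoint of the current lower shadow
    return (
        body[1] > body[0] and body[1] > body[2]     # central body larger than its neighbours
        and l[2] > h[0]                             # current low above the high two bars ago
        and rng[2] > 0 and lower2 / rng[2] > p      # current lower shadow exceeds threshold
        and rng[0] > 0 and upper0 / rng[0] > p      # older upper shadow exceeds threshold
        and lower2_mid > upper0_mid                 # current lower-shadow average is higher
    )
```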
🔶 SUPPLEMENTARY MATERIAL
You can see our previously posted script that detects various imbalances as well as regular Fair Value Gaps which have very similar usability to Implied Fair Value Gaps here:
Vector2FunctionClip
Library "Vector2FunctionClip"
Sutherland-Hodgman polygon clipping algorithm.
reference: rosettacode.org
clip(source, reference)
Perform Clip operation on a vector with another.
Parameters:
source : array . Source polygon to be clipped.
reference : array . Reference polygon to clip source.
Returns: array.
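For readers unfamiliar with the algorithm, here is a plain Python sketch of Sutherland-Hodgman clipping using (x, y) tuples; it assumes a convex clip polygon given in counter-clockwise order and is independent of the library's Vector2 types.

```python
def clip(subject, clip_poly):
    """Clip the `subject` polygon by the convex `clip_poly` (lists of (x, y) vertices)."""
    def inside(p, a, b):
        # point is on the left of (or on) the directed clip edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # intersection of segment p1-p2 with the infinite line through a-b
        dx1, dy1 = p2[0] - p1[0], p2[1] - p1[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        denom = dx1 * dy2 - dy1 * dx2
        t = ((a[0] - p1[0]) * dy2 - (a[1] - p1[1]) * dx2) / denom
        return (p1[0] + t * dx1, p1[1] + t * dy1)

    output = list(subject)
    for i in range(len(clip_poly)):                      # clip against each edge in turn
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        input_pts, output = output, []
        for j in range(len(input_pts)):
            p_prev, p_curr = input_pts[j - 1], input_pts[j]
            if inside(p_curr, a, b):
                if not inside(p_prev, a, b):
                    output.append(intersect(p_prev, p_curr, a, b))
                output.append(p_curr)
            elif inside(p_prev, a, b):
                output.append(intersect(p_prev, p_curr, a, b))
    return output
```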
A New Adaptive Moving Average [CC]
The New Adaptive Moving Average was created by Scott Cong (Stocks and Commodities Mar 2023) and his idea was to focus on the Adaptive Moving Average created by Perry Kaufman and to try to improve it by introducing a concept of effort vs results. In this case the effort would be the total range of the underlying price action since each bar is essentially a war of the bulls vs the bears. The result would be the total range of the close, so we are looking for the highest close and lowest close in that same time period. This gives us an alpha that we can use to plug into the Kaufman Adaptive Moving Average algorithm, which gives us a brand new indicator that can hug the price just enough to allow us to ride the stock up or down. I have color coded it to be darker colors when it is a strong signal and lighter colors when it is a normal signal. Buy when the line turns green and sell when it turns red.
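Below is a hedged Python sketch of how such an effort-vs-result alpha could be fed into the Kaufman smoothing step; it reflects my reading of the description (result = highest close minus lowest close, effort = sum of bar ranges) rather than Scott Cong's exact formula.

```python
def new_adaptive_ma(highs, lows, closes, length=14, prev_ama=None, fast=2, slow=30):
    """One update step: effort-vs-result alpha replaces Kaufman's efficiency ratio and
    is plugged into the usual KAMA smoothing. Illustrative reading of the description."""
    effort = sum(h - l for h, l in zip(highs[-length:], lows[-length:]))  # total bar ranges
    window = closes[-length:]
    result = max(window) - min(window)                                    # highest minus lowest close
    alpha = result / effort if effort > 0 else 0.0
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    sc = (alpha * (fast_sc - slow_sc) + slow_sc) ** 2                     # KAMA smoothing constant
    prev = prev_ama if prev_ama is not None else closes[-2]
    return prev + sc * (closes[-1] - prev)
```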
Let me know if there are any other indicators you would like to see me publish!
Recursive Zigzag [Trendoscope]
Here is another outcome of the Object Oriented Zigzag and Pattern Ecosystem of Libraries.
We already have another implementation of recursive zigzag which makes use of earlier library rzigzag . Here in this example, we make use of similar logic but leverage the new type and method based Zigzag system libraries to derive the indicator.
🎲 Design Overview
Similar to Recursive Auto Pitchfork, here too the indicator code is around 50 lines, whereas most of the heavy lifting is done by the libraries.
🎲 Base Libraries
Base libraries are those which does not have any dependency. They form basic structures which are later used in other libraries. These libraries need to be crafted carefully so that minimal updates are done later on. Any updates on these libraries will impact all the dependent libraries and scripts.
🎯 Drawing
DrawingTypes - Defines basic drawing types Point, Line, Label, Box, Linefill and related property types.
DrawingMethods - All the methods or functionality surrounding Basic types are defined here.
🎲 Layer 1 Libraries
These are the libraries which have a direct dependency on the base libraries.
🎯 Zigzag
ZigzagTypes - Types required for defining Zigzag and Divergence
ZigzagMethods - Methods associated with Zigzag Type definitions.
🎲Indicator
The indicator draws zigzags based on a given length, and then recursively derives next-level zigzags based on previous levels. As per its utility, the indicator is useful in several ways:
Visualising price structure based on zigzag pivots - which in turn can help visualise patterns.
Ability to add any oscillator makes it easy to spot divergences with choice of indicators.
Programmers can use the derived values to build complex algorithms such as automatic pattern recognition.
🎯 Settings
Settings are explained via tooltips. These are very much straight forward and directly related to zigzag, oscillators and divergence.
BE - Golden Cross Crude Key
Traders, I have been observing crude oil for about 3 months now and somehow I can see that crude is respecting the 42-day moving average, and crosses have created massive spikes most of the time.
However, you need to be mindful of the time to trade and the timeframe, since not all crosses create spikes.
Note: I have been testing on a 15min timeframe.
Keeping this in mind, this indicator is an automated solution which takes trade entries on crosses plus a buffer and exits based on the specified SL type.
Enjoy!
DISCLAIMER: No sharing, copying, reselling, modifying, or any other forms of use are authorized for our documents, script / strategy, and the information published with them. This informational planning script / strategy is strictly for individual use and educational purposes only. This is not financial or investment advice. Investments are always made at your own risk and are based on your personal judgement. I am not responsible for any losses you may incur. Please invest wisely.
Happy to receive suggestions and feedback in order to improve the performance of the indicator better.
ICT Macros
This script allows traders to visualize the range of time when a macro (an automated series of instructions/trades from large fund traders, executed by an algorithm) will likely occur in the market. It does this by drawing vertical lines and labels on the chart at these specific times:
(Macro Open) - 9:50 AM EST
(Macro Close) - 10:10 AM EST
(Macro Open) - 10:50 AM EST
(Macro Close) - 11:10 AM EST
(Macro Open) - 1:10 PM EST
(Macro Close) - 1:40 PM EST
(Macro Open) - 3:15 PM EST
(Macro Close) - 3:45 PM EST
The theory behind the use of these macros - is that the market will either seek buy side or sell side liquidity, or seek to rebalance price at a point of interest in between the open and close of the macro. Traders who follow this theory can use that information to anticipate how price might behave.
When a macro occurs, the script draws a vertical line on the chart using a dotted line style with a user-defined color. Additionally, a label is placed above the line to indicate whether it is a Macro Open or Macro Close event.
To preserve space, the labels are abbreviated on chart - "Macro Open" (M.O.) and "Macro Close" (M.C.) for both the morning and afternoon trading sessions. The labels may be turned on/off by the user.
The script also includes alerts that can notify traders when a macro occurs. These alerts can be set to go off once per bar close, and the alert message indicates the specific macro type and time.
This script is entirely open-source, meaning that traders can read the code and modify it as needed. Credit to the foundation of this script goes to TradingView user @rickyzcarroll for his open source Strat Assistant Hour Flip script. Important changes include the specific time changes and alert function.
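For illustration, here is a minimal Pine sketch (not the published script) of how one such macro window can be marked; the session string, color, and label text below are assumptions chosen for demonstration only.
//@version=5
indicator("Macro marker sketch", overlay = true)
macroColor = input.color(color.orange, "Macro line color")
// true while the bar falls inside the 9:50 AM EST macro-open minute (hypothetical window)
inMacroOpen = not na(time(timeframe.period, "0950-0951", "America/New_York"))
// draw once, on the first bar of the window
if inMacroOpen and not inMacroOpen[1]
    line.new(bar_index, low, bar_index, high, extend = extend.both, style = line.style_dotted, color = macroColor)
    label.new(bar_index, high, "M.O.", style = label.style_label_down, color = macroColor)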
Investments/swing trading strategy for different assets
Stop worrying about catching the lowest price; it's almost impossible! With this trend-following strategy and protection from bearish phases, you will know how to enter the market properly to obtain benefits in the long term.
Backtesting context: 1899-11-01 to 2023-02-16 of SPX by Tvc. Commissions: 0.05% for each entry, 0.05% for each exit. Risk per trade: 2.5% of the total account
For this strategy, 5 indicators are used:
One EMA of 200 periods
ATR Stop Loss indicator from Gatherio
Squeeze Momentum indicator from LazyBear
Moving average convergence/divergence, or MACD
Relative strength index, or RSI
Trade conditions:
There are three types of entries; one of them depends on whether we want to trade against a bearish trend or not.
---If we keep the Against trend option deactivated, the rules for the two types of entries are:---
First type of entry:
With the following rules, we will be able to enter in a pullback situation:
Squeeze Momentum is under the 0 line (red)
Close is above the 200 EMA and close is higher than the previous close
The MACD histogram is under the 0 line and is higher than the previous one
Once these rules are met, we enter a buy position. The stop loss will be determined by the ATR stop loss (white point) and break even (blue point) by a risk/reward ratio of 1:1.
For closing this position: once Squeeze Momentum crosses over 0, we wait until it crosses back under 0 and then close the position. Otherwise, the position would already have been closed by break even or stop loss.
Second type of entry:
With the following rules, we will not miss a possible bullish movement:
Close is above 200 Ema
Squeeze momentum crosses under 0 line
Once these rules are met, we enter a buy position. The stop loss will be determined by the ATR stop loss (white point) and break even (blue point) by a risk/reward ratio of 1:1.
As with the previous type of entry, for closing this position: once Squeeze Momentum crosses over 0, we wait until it crosses back under 0 and then close the position. Otherwise, the position would already have been closed by break even or stop loss.
---If we keep the Against trend option activated, the rules are the same as the ones above, but with one more type of entry. This is more useful on weekly timeframes, but could also be used on the daily timeframe:---
Third type of entry:
Close is under 200 Ema
Squeeze momentum crosses under 0 line
Once these rules are met, we enter a buy position. The stop loss will be determined by the ATR stop loss (white point) and break even (blue point) by a risk/reward ratio of 1:1.
As with the previous types of entries, for closing this position: once Squeeze Momentum crosses over 0, we wait until it crosses back under 0 and then close the position. Otherwise, the position would already have been closed by break even or stop loss.
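To make the entry logic concrete, here is a minimal strategy sketch of the second entry type. The sqzVal calculation below is only an approximation of the LazyBear Squeeze Momentum value, the length of 20 is an assumption, and stop loss, break even, and the squeeze-based exit are omitted for brevity.
//@version=5
strategy("Second entry type sketch", overlay = true)
ema200 = ta.ema(close, 200)
// approximation of the LazyBear Squeeze Momentum value (length of 20 assumed)
sqzVal = ta.linreg(close - math.avg(math.avg(ta.highest(high, 20), ta.lowest(low, 20)), ta.sma(close, 20)), 20, 0)
// second entry type: price above the 200 EMA while squeeze momentum crosses under the 0 line
longEntry = close > ema200 and ta.crossunder(sqzVal, 0)
if longEntry and strategy.position_size == 0
    strategy.entry("Long", strategy.long)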
Risk management
To calculate the position size, you will risk just a small percentage of your capital per trade, using the ATR stop loss as the reference.
Example: You have 1000 USD and you only want to risk 2.5% of your account; there is a buy signal at a price of 4,000 USD. The stop loss price from the ATR stop loss is 3,900. You calculate the distance in percent between 4,000 and 3,900; in this case, that distance is 2.5%. Then you calculate your position size this way: (initial or current capital * risk per trade of your account) / (stop loss distance).
Using these values in the formula: (1000 * 2.5%) / (2.5%) = 1000 USD. This means you have to use a 1000 USD position to risk 2.5% of your account.
We will use this risk management for applying compound interest.
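For clarity, the same calculation expressed as a small Pine sketch; the fixed numbers simply mirror the worked example above.
//@version=5
indicator("Position sizing sketch")
capital      = 1000.0   // current account equity in USD
riskPerTrade = 0.025    // fraction of the account risked per trade (2.5%)
entryPrice   = 4000.0   // signal price
stopPrice    = 3900.0   // ATR stop loss price
stopDistance = (entryPrice - stopPrice) / entryPrice   // 0.025, i.e. 2.5%
positionSize = capital * riskPerTrade / stopDistance   // 1000 USD
plot(positionSize, "Position size (USD)")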
In settings, with the position amount calculator, you can enter your account size in USD and the percentage of the account to risk per trade. The value shown in green in the upper left corner is the amount in USD to use in order to risk that specific percentage of your account.
Script functions
Inside the settings, you will find utilities for displaying the ATR stop loss, break evens, positions, signals, indicators, etc.
You will find the settings for risk management at the end of the script if you want to change something. But remember, do not change the indicator values; the idea is not to over-optimize the strategy.
If you want to change the initial capital for backtesting the strategy, go to Properties, and also enter your exchange's commissions and slippage for more realistic results.
If you activate break even using RSI, break even will be activated when RSI crosses under the overbought zone. This can work on some assets.
---Important: In risk management you can find an option called "Use leverage?". Activate this if you want to backtest using leverage, meaning that if your initial/current capital is not enough to risk the chosen percentage of your account, leverage will be used to reach the required position size for a buy position. Otherwise, the position size will be limited by your initial/current capital.---
Some things to consider
USE UNDER YOUR OWN RISK. PAST RESULTS DO NOT REPRESENT THE FUTURE.
DEPENDING ON THE % ACCOUNT RISK PER TRADE, YOU COULD REQUIRE LEVERAGE TO OPEN SOME POSITIONS, SO PLEASE BE CAREFUL AND USE THE RISK MANAGEMENT CORRECTLY.
Do not forget to change commissions and other parameters related to backtesting results!
Some assets and timeframes where the strategy has also worked:
BTCUSD : 4H, 1D, W
SPX (US500) : 4H, 1D, W
GOLD : 1D, W
SILVER : 1D, W
ETHUSD : 4H, 1D
DXY : 1D
AAPL : 4H, 1D, W
AMZN : 4H, 1D, W
META : 4H, 1D, W
(and other stocks)
BANKNIFTY : 4H, 1D, W
DAX : 1D, W
RUT : 1D, W
HSI : 1D, W
NI225 : 1D, W
USDCOP : 1D, W
Recursive Auto-Pitchfork [Trendoscope]
"Say Hi" to object-oriented programming with Pine Script using types and methods. This is the beginning of a new era of Pine Script, where we are moving from isolated scripts containing indicators and strategies to a whole ecosystem of object-oriented programming with libraries of highly reusable components. Those who are familiar with programming will have already realised how big these improvements are and what they bring to the table.
With this script, I am not just providing an indicator for traders but also an introduction for programmers on how to design and build object-oriented components in Pine Script using types and methods. Big thanks to TradingView and the Pine development team for making this happen. We look forward to many such gifts in the future :)
🎲 Architecture
As mentioned before, we are not just building an indicator here, but an ecosystem of components. Using types and methods, we can visualise libraries as classes. Thus, we can build an ecosystem of libraries in a layered approach to enhance effective code reusability.
Generic architecture can be visualised as below
Coming to the specific case of the Auto Pitchfork indicator, the indicator code is less than 50 lines of logic and around 100 lines of inputs, but most of the heavy lifting is done by the libraries underneath. Here is a snapshot of the related libraries and how they are connected.
All libraries are divided into two portions.
Types - Contains only type definitions
Methods - Contains only method definitions related to the types defined in the Types library
Together, these libraries can be visualised as a class. Methods are defined in such a way that all exported methods are related to the types, and no other functions or features are defined. If we need further functionality that does not depend on the types, we need to do this via some other library and use it here. Similarly, we should not define any methods related to these types in other libraries.
The reason for splitting the libraries into types and methods is to enable updating methods without disturbing the types. Since libraries create interdependencies due to versioning, it is best to make fewer updates to the type definitions. Splitting the two enables adding more features while keeping the type definition version intact.
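For readers new to this style, here is a minimal sketch of the pattern in a single script; the names are hypothetical and not the actual libraries, but the type definition plays the role of a Types library and the method plays the role of a Methods library.
//@version=5
indicator("Types and methods sketch", overlay = true)
// "Types" part: data definition only
type Point
    int   bar
    float price
// "Methods" part: functionality attached to the type
method toLabel(Point this, string txt) =>
    label.new(this.bar, this.price, txt, style = label.style_label_down)
if barstate.islast
    p = Point.new(bar_index, high)
    p.toLabel("Pivot")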
🎲 Base Libraries
Base libraries are those which do not have any dependency. They form basic structures which are later used in other libraries. These libraries need to be crafted carefully so that minimal updates are needed later on. Any update to these libraries will impact all the dependent libraries and scripts.
🎯 Drawing
DrawingTypes - Defines basic drawing types Point, Line, Label, Box, Linefill and related property types.
DrawingMethods - All the methods or functionality surrounding Basic types are defined here.
🎲 Layer 1 Libraries
These are the libraries which have a direct dependency on base libraries.
🎯 Zigzag
ZigzagTypes - Types required for defining Zigzag and Divergence
ZigzagMethods - Methods associated with Zigzag Type definitions.
🎯 Pitchfork
PitchforkTypes - Basic and Drawing Types for Pitchfork objects
PitchforkMethods - Methods associated with Pitchfork type definitions
🎲 Indicator and Settings
The indicator draws pitchforks based on recursive zigzag configurations. The recursive zigzag is derived with the following logic:
The base-level zigzag is calculated with the regular zigzag algorithm using the given length and depth
The next-level zigzag is calculated based on the base zigzag, and we recursively calculate higher-level zigzags until we are left with 4 or fewer pivots or no further reduction is possible
At every zigzag level, we then check the last 3 pivots and draw a pitchfork based on the retracement ratio.
Indicator settings are summarised in the tooltips and are as below.
Finally, big thanks to my partner @CryptoArch_ for bringing up the topic of pitchfork for our next development.
Dark Energy Divergence Oscillator
The Dark Energy Divergence Oscillator (DEDO)
What makes The Universe grow at an accelerating pace?
Dark Energy.
What makes The Economy grow at an accelerating pace?
Debt.
Debt is the Dark Energy of The Economy.
I pronounce DEDO "Deed-oh", but variations are fine with me.
Note: The Pine Script version of DEDO is improved from the original formula, which used a constant all-time high calculation in the normalization factor. This was technically not as accurate for calculating liquidity pressure in historical data because it meant that historical prices were being tested against future liquidity factors. Now using Pine, the functions can be normalized for the bar at the time of calculation, so the liquidity factors are normalized per candle, not across the entire series, which feels like an improvement to me.
Thought Process:
It's all about the liquidity. What I started with is a correlation between major stock indices such as SPX and WRESBAL, a balance sheet metric on FRED.
After September 2008, when QE was initiated, many asset valuations started to follow more closely with liquidity factors. This led me to create a function that could combine asset prices and liquidity in WRESBAL, in order to calculate their divergence and chart the signal in TradingView.
The original formula:
First, we don't want "non-QE" data. we only want data for the market affected by QE .
So, find SPX on the day of pre-QE: 1255.08 and subtract that from the 2022 top 4818.62 = 3563.54
With this post-QE SPX range, now you can normalize the price level simply by dividing by the range = ((SPX - 1255.08) / 3563.54)
Normalization produces values from 0 to 1 so that they can be compared with other normalized figures.
In order to test the 0 to 1 normalized SPX range measure against the liquidity number, WRESBAL, it's the same idea: normalize it using the max as the denominator and you get a 0 to 1 liquidity index:
(WRESBAL / 4276000000000)
Subtract one from the other to get the divergence:
((WRESBAL / 4276000000000) - ((SPX - 1255.08) / 3563.54)) * 10
x10 to reduce decimal places, but this option is configurable in DEDO's input settings tab.
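Expressed as a minimal Pine sketch of the original formula (the symbol tickers used to request SPX and WRESBAL are assumptions; the constants are the ones quoted above):
//@version=5
indicator("DEDO original formula sketch")
spx     = request.security("SP:SPX", "D", close)
wresbal = request.security("FRED:WRESBAL", "D", close)
spxNorm = (spx - 1255.08) / 3563.54   // 0..1 normalized post-QE SPX range
liqNorm = wresbal / 4276000000000     // 0..1 normalized liquidity index
dedo    = (liqNorm - spxNorm) * 10    // x10 to reduce decimal places
plot(dedo, "DEDO (original formula)")
hline(0, "Zero bound")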
Positive values indicate there's ample liquidity to hold up price or even create bullish momentum in some cases. Negative values mean price levels are potentially extended beyond what liquidity levels can support.
Note: many viewers of the charts on social media wanted the values to go down in alignment with price moving down, so inverting the chart is what I do with Option + I. I like the fact that negative values represent a deficit in liquidity to hold up price but that's just me.
Now with Pine Script and some help from other liquidity-focused accounts on TradingView, I was able to derive a script that includes central bank liquidity and Reverse Repo liquidity drain, all in one algorithm, with adjustable settings.
Central bank assets included in this version:
-JPY (Japan)
-CNY (China)
-UK (British Pound)
-SNB (Swiss National Bank)
-ECB (European Central Bank)
Central Bank assets can be adjusted to an allocation % so that the formula is adjusted for the market cap of the asset.
A handy table in the lower right corner displays useful information about the asset market cap, and percentage it represents in the liquidity pool.
Reverse repo soak is also an optional addition in the Input settings using the RRPONTSYD value from FRED. This value is subtracted from global liquidity used to determine divergence since it is swept away from markets when residing in the Fed's reverse repo facility.
There is an option to draw a line at the Zero bound. This provides a convenience so that the line doesn't keep having to be redrawn on every chart. The normalized equation produces a value that should oscillate around zero, as price/valuation grows past liquidity support, falls under it, and repeats in cycles.
Spoofing Detector with VPOC [CHE]
"We're keeping an eye on the market makers, zooming in for a closer look."
Spoofing and Volume Point of Control (VPOC) are terms used in the context of market manipulation and market analysis in financial markets.
A spoofing detector is a tool developed to detect the spoofing of orders. Spoofing refers to a practice where a market participant places large orders to deceive other market participants and influence the price of a stock. These large orders, however, are not executed but cancelled shortly after, creating a false demand for a specific stock and influencing the price. A spoofing detector can use algorithms to detect and report these practices to maintain the integrity of the market.
The Volume Point of Control (VPOC) is a concept in technical analysis aimed at identifying the key price level at which a stock was bought and sold. VPOC is calculated by analyzing the volume data of a stock and determining the price level at which the largest volume was traded for a specific period. This price level can serve as an indicator of the current market trend and market interest in a specific stock.
There is a substantive connection between a spoofing detector and VPOC because both tools can be used to gain a better understanding of the stock markets and detect potential forms of market manipulation. For example, VPOC can be used as an indicator of potential market manipulation when an abnormal distribution of trading volume is observed at a specific price level. A spoofing detector can then be used to detect and report these activities.
Pine Script Indicator Analysis:
This is a Pine Script code for a spoofing detector and volume point of control (VPOC) indicator. The purpose of the indicator is to detect and highlight potential spoofing activities in the market, as well as to plot the volume point of control on the chart.
Inputs:
Median Lookback: This input defines the length of the median calculation, with a default value of 25.
Range To Edges Threshold: This input sets a threshold value for the range to edges calculation, with a default value of 200.
Multiplier 1: This input sets a multiplier value to be used in the average true range calculation, with a default value of 0.8.
Multiplier 2: This input sets a multiplier value to be used in the average true range calculation, with a default value of 2.0.
Multiplier 3: This input sets a multiplier value to be used in the average true range calculation, with a default value of 3.0.
Variables:
y, x, ds, os: These are arrays and a variable used for the first part of the spoofing detection process.
y1, x1, ds1, os1: These are arrays and a variable used for the second part of the spoofing detection process.
y2, x2, ds2, os2: These are arrays and a variable used for the third part of the spoofing detection process.
Calculation:
The code starts by defining some variables, such as the bar index (n), the close price (src), and the average true range (atr) with different multipliers.
Next, the median of the close price is calculated over the lookback period specified by the "Median Lookback" input.
Then, the difference between the current median and the previous median is calculated, and the value is compared with the average true range with different multipliers to determine the state of the market (up, down, or unchanged).
The code then checks if the state has changed from the previous bar, and if so, the code performs a spoofing detection calculation.
The spoofing detection calculation involves determining the range between the first and last bar in the median calculation, and dividing it by the sum of the absolute differences calculated earlier. If the result is below the "Range To Edges Threshold" input, the code plots a line and a label on the chart indicating a potential spoofing activity.
The process is repeated for each of the three parts of the spoofing detection process.
VPOC:
The VPOC code is used to calculate the Volume Point of Control (VPOC) on a chart. The VPOC is the price level with the highest volume over a specified lookback period. The script contains several functions and inputs that allow the user to customize the calculation.
Inputs:
i_source: This input allows the user to specify the source for the VPOC price calculation (the close price of the bar).
i_vpocThreshold: This input allows the user to set the threshold percentage for the VPOC highlight.
Functions:
timeStep_translate(): This function returns a string representing the time step of the lower time frame based on the current time frame of the chart.
ltfStats(): This function returns an array of the source and volume of the lower time frame.
ltfSrc, ltfVolume: This line requests the lower time frame data using the request.security_lower_tf function, with the lower time frame step calculated by the timeStep_translate() function.
maxVolume and indexOfMaxVolume: These variables store the maximum volume value and its corresponding index in the ltfVolume array.
maxVol: This variable stores the source value corresponding to the maximum volume.
vpocThresholdMet: This variable is a boolean that is true when the volume at the maximum volume price level is greater than or equal to the threshold percentage of the total volume.
vpocColor: This variable stores the color for the VPOC plot.
vh: This variable stores the highest volume in the lookback period.
plotshape(): This function plots the VPOC on the chart. The shape will be plotted only if the volume is greater than the specified threshold percentage of the highest volume in the lookback period. The shape will be labeled with the text "VC".
Overall, this script calculates the VPOC for a chart by aggregating volume data from a lower time frame and plotting a shape at the price level with the highest volume. The user can specify the source for the VPOC calculation and the threshold percentage for the VPOC highlight.
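A simplified sketch of that mechanic is shown below; fixing the lower timeframe to 1 minute and comparing the peak volume against the bar's total lower-timeframe volume are simplifying assumptions, not the exact logic of the published script.
//@version=5
indicator("VPOC sketch", overlay = true, max_labels_count = 500)
i_vpocThreshold = input.float(50, "VPOC threshold %", minval = 0, maxval = 100)
// lower-timeframe close and volume belonging to the current chart bar
[ltfSrc, ltfVolume] = request.security_lower_tf(syminfo.tickerid, "1", [close, volume])
if array.size(ltfVolume) > 0
    maxVolume        = array.max(ltfVolume)
    indexOfMaxVolume = array.indexof(ltfVolume, maxVolume)
    maxVol           = array.get(ltfSrc, indexOfMaxVolume)   // price level with the highest LTF volume
    vpocThresholdMet = maxVolume >= array.sum(ltfVolume) * i_vpocThreshold / 100
    if vpocThresholdMet
        label.new(bar_index, maxVol, "VC", style = label.style_label_left, size = size.tiny)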
Important: VPOC shows everything in real time as a leading indicator; the triple spoofing detector is trailing.
Best regards
Chervolino
[blackcat] L1 Chop Zones
Level: 1
Background
I was inspired by NILX's "Tool: Chop & Trade Zones". This can be used as an element for trading system control.
Function
I use my own customized algorithm to replace the core of the NILX one, aiming to provide a smoother trend measure for chop and trend judgement.
It is now quite different and works as an oscillator within a range of 0~100. The advantage is that it can now use constant threshold values for all timeframes and all trading pairs.
Remarks
Feedback is appreciated.
Any Oscillator Underlay [TTF]
We are proud to release a new indicator that has been a while in the making - the Any Oscillator Underlay (AOU)!
Note: There is a lot to discuss regarding this indicator, including its intent and some of how it operates, so please be sure to read this entire description before using this indicator to help ensure you understand both the intent and some limitations with this tool.
Our intent for building this indicator was to accomplish the following:
Combine all of the oscillators that we like to use into a single indicator
Take up a bit less screen space for the underlay indicators for strategies that utilize multiple oscillators
Provide a tool for newer traders to be able to leverage multiple oscillators in a single indicator
Features:
Includes 8 separate, fully-functional indicators combined into one
Ability to easily enable/disable and configure each included indicator independently
Clearly named plots to support user customization of color and styling, as well as manual creation of alerts
Ability to customize sub-indicator title position and color
Ability to customize sub-indicator divider lines style and color
Indicators that are included in this initial release:
TSI
2x RSIs (dubbed the Twin RSI )
Stochastic RSI
Stochastic
Ultimate Oscillator
Awesome Oscillator
MACD
Outback RSI (Color-coding only)
Quick note on OB/OS:
Before we get into covering each included indicator, we first need to cover a core concept for how we're defining OB and OS levels. To help illustrate this, we will use the TSI as an example.
The TSI by default has a mid-point of 0 and a range of -100 to 100. As a result, a common practice is to place lines on the -30 and +30 levels to represent OS and OB zones, respectively. Most people tend to view these levels as distance from the edges/outer bounds or as absolute levels, but we feel a better way to frame the OB/OS concept is to instead define it as distance ("offset") from the mid-line. In keeping with the -30 and +30 levels in our example, the offset in this case would be "30".
Taking this a step further, let's say we decided we wanted an offset of 25. Since the mid-point is 0, we'd then calculate the OB level as 0 + 25 (+25), and the OS level as 0 - 25 (-25).
Now that we've covered the concept of how we approach defining OB and OS levels (based on offset/distance from the mid-line), and since we did apply some transformations, rescaling, and/or repositioning to all of the indicators noted above, we are going to discuss each component indicator to detail both how it was modified from the original to fit the stacked-indicator model, as well as the various major components that the indicator contains.
TSI:
This indicator contains the following major elements:
TSI and TSI Signal Line
Color-coded fill for the TSI/TSI Signal lines
Moving Average for the TSI
TSI Histogram
Mid-line and OB/OS lines
Default TSI fill color coding:
Green : TSI is above the signal line
Red : TSI is below the signal line
Note: The TSI traditionally has a range of -100 to +100 with a mid-point of 0 (range of 200). To fit into our stacking model, we first shrunk the range to 100 (-50 to +50 - cut it in half), then repositioned it to have a mid-point of 50. Since this is the "bottom" of our indicator-stack, no additional repositioning is necessary.
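As a rough illustration of that transformation, assuming the built-in ta.tsi() (which returns values in roughly the -1..1 range, hence the multiplication by 100 first):
//@version=5
indicator("TSI rescaling sketch")
tsiRaw     = ta.tsi(close, 13, 25) * 100   // roughly -100..+100, mid-point 0
tsiStacked = tsiRaw / 2 + 50               // roughly 0..100, mid-point 50 (range halved, then repositioned)
plot(tsiStacked, "TSI (rescaled)")
hline(50, "Mid-line")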
Twin RSI:
This indicator contains the following major elements:
Fast RSI (useful if you want to leverage 2x RSIs as it makes it easier to see the overlaps and crosses - can be disabled if desired)
Slow RSI (primary RSI)
Color-coded fill for the Fast/Slow RSI lines (if Fast RSI is enabled and configured)
Moving Average for the Slow RSI
Mid-line and OB/OS lines
Default Twin RSI fill color coding:
Dark Red : Fast RSI below Slow RSI and Slow RSI below Slow RSI MA
Light Red : Fast RSI below Slow RSI and Slow RSI above Slow RSI MA
Dark Green : Fast RSI above Slow RSI and Slow RSI below Slow RSI MA
Light Green : Fast RSI above Slow RSI and Slow RSI above Slow RSI MA
Note: The RSI naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Stochastic and Stochastic RSI:
These indicators contain the following major elements:
Configurable lengths for the RSI (for the Stochastic RSI only), K, and D values
Configurable base price source
Mid-line and OB/OS lines
Note: The Stochastic and Stochastic RSI both have a normal range of 0 to 100 with a mid-point of 50, so no rescaling or transformations are done on either of these indicators. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Ultimate Oscillator (UO):
This indicator contains the following major elements:
Configurable lengths for the Fast, Middle, and Slow BP/TR components
Mid-line and OB/OS lines
Moving Average for the UO
Color-coded fill for the UO/UO MA lines (if UO MA is enabled and configured)
Default UO fill color coding:
Green : UO is above the moving average line
Red : UO is below the moving average line
Note: The UO naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Awesome Oscillator (AO):
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the AO calculation
Configurable price source for the moving averages used in the AO calculation
Mid-line
Option to display the AO as a line or pseudo-histogram
Moving Average for the AO
Color-coded fill for the AO/AO MA lines (if AO MA is enabled and configured)
Default AO fill color coding (Note: Fill was disabled in the image above to improve clarity):
Green : AO is above the moving average line
Red : AO is below the moving average line
Note: The AO technically has an infinite (unbounded) range - -∞ to ∞ - and the effective range is bound to the underlying security price (e.g. BTC will have a wider range than SP500, and SP500 will have a wider range than EUR/USD). We employed some special techniques to rescale this indicator into our desired range of 100 (-50 to 50), and then repositioned it to have a midpoint of 50 (range of 0 to 100) to meet the constraints of our stacking model. We then do one final repositioning to place it in the correct position in the indicator-stack based on which other indicators are also enabled. For more details on how we accomplished this, read our section "Binding Infinity" below.
MACD:
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the MACD calculation
Configurable price source for the moving averages used in the MACD calculation
Configurable length and calculation method for the MACD Signal Line calculation
Mid-line
Note: Like the AO, the MACD also technically has an infinite (unbound) range. We employed the same principles here as we did with the AO to rescale and reposition this indicator as well. For more details on how we accomplished this, read our section "Binding Infinity" below.
Outback RSI (ORSI):
This is a stripped-down version of the Outback RSI indicator (linked above) that only includes the color-coding background (suffice it to say that it was not technically feasible to attempt to rescale the other components in a way that could consistently be clearly seen on-chart). As this component is a bit of a niche/special-purpose sub-indicator, it is disabled by default, and we suggest it remain disabled unless you have some pre-defined strategy that leverages the color-coding element of the Outback RSI that you wish to use.
Binding Infinity - How We Incorporated the AO and MACD (Warning - Math Talk Ahead!)
Note: This applies only to the AO and MACD at time of original publication. If any other indicators are added in the future that also fall into the category of "binding an infinite-range oscillator", we will make that clear in the release notes when that new addition is published.
To help set the stage for this discussion, it's important to note that the broader challenge of "equalizing inputs" is nothing new. In fact, it's a key element in many of the most popular fields of data science, such as AI and Machine Learning. They need to take a diverse set of inputs with a wide variety of ranges and seemingly-random inputs (referred to as "features"), and build a mathematical or computational model in order to work. But, when the raw inputs can vary significantly from one another, there is an inherent need to do some pre-processing to those inputs so that one doesn't overwhelm another simply due to the difference in raw values between them. This is where feature scaling comes into play.
With this in mind, we implemented 2 of the most common methods of Feature Scaling - Min-Max Normalization (which we call "Normalization" in our settings), and Z-Score Normalization (which we call "Standardization" in our settings). Let's take a look at each of those methods as they have been implemented in this script.
Min-Max Normalization (Normalization)
This is one of the most common - and most basic - methods of feature scaling. The basic formula is: y = (x - min)/(max - min) - where x is the current data sample, min is the lowest value in the dataset, and max is the highest value in the dataset. In this transformation, the max would evaluate to 1, and the min would evaluate to 0, and any value in between the min and the max would evaluate somewhere between 0 and 1.
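A minimal Pine sketch of this transformation, using a rolling window as a stand-in for "the dataset" (the 100-bar lookback and the MACD-line input are assumptions for illustration, not our production code):
//@version=5
indicator("Min-Max normalization sketch")
minMaxNorm(float src, int length) =>
    lo = ta.lowest(src, length)
    hi = ta.highest(src, length)
    hi == lo ? 0.5 : (src - lo) / (hi - lo)   // maps src into the 0..1 range
macdLine = ta.ema(close, 12) - ta.ema(close, 26)
plot(minMaxNorm(macdLine, 100), "Normalized MACD line")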
The key benefits of this method are:
It can be used to transform datasets of any range into a new dataset with a consistent and known range (0 to 1).
It has no dependency on the "shape" of the raw input dataset (i.e. does not assume the input dataset can be approximated to a normal distribution).
But there are a couple of "gotchas" with this technique...
First, it assumes the input dataset is complete, or an accurate representation of the population via random sampling. While in most situations this is a valid assumption, in trading indicators we don't really have that luxury as we're often limited in what sample data we can access (i.e. number of historical bars available).
Second, this method is highly sensitive to outliers. Since the crux of this transformation is based on the max-min to define the initial range, a single significant outlier can result in skewing the post-transformation dataset (i.e. major price movement as a reaction to a significant news event).
You can potentially mitigate those 2 "gotchas" by using a mechanism or technique to find and discard outliers (e.g. calculate the mean and standard deviation of the input dataset and discard any raw values more than 5 standard deviations from the mean), but if your most recent datapoint is an "outlier" as defined by that algorithm, processing it using the "scrubbed" dataset would result in that new datapoint being outside the intended range of 0 to 1 (e.g. if the new datapoint is greater than the "scrubbed" max, it's post-transformation value would be greater than 1). Even though this is a bit of an edge-case scenario, it is still sure to happen in live markets processing live data, so it's not an ideal solution in our opinion (which is why we chose not to attempt to discard outliers in this manner).
Z-Score Normalization (Standardization)
This method of rescaling is a bit more complex than the Min-Max Normalization method noted above, but it is also a widely used process. The basic formula is: y = (x – μ) / σ - where x is the current data sample, μ is the mean (average) of the input dataset, and σ is the standard deviation of the input dataset. While this transformation still results in a technically-infinite possible range, the output of this transformation has 2 very significant properties - the output dataset has a mean (μ) of 0 and a standard deviation (σ) of 1.
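A matching sketch of this transformation applied to the Awesome Oscillator, again using a rolling lookback as a stand-in for the dataset (the 100-bar window is an assumption, and this is not the exact implementation used in the AOU):
//@version=5
indicator("Z-Score standardization sketch")
zScore(float src, int length) =>
    mu    = ta.sma(src, length)
    sigma = ta.stdev(src, length)
    sigma == 0 ? 0.0 : (src - mu) / sigma
ao = ta.sma(hl2, 5) - ta.sma(hl2, 34)   // Awesome Oscillator
plot(zScore(ao, 100), "Standardized AO")
hline(0, "Mid-line")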
The key benefits of this method are:
As it's based on normalizing the mean and standard deviation of the input dataset instead of a linear range conversion, it is far less susceptible to outliers significantly affecting the result (and in fact has the effect of "squishing" outliers).
It can be used to accurately transform disparate sets of data into a similar range regardless of the original dataset's raw/actual range.
But there are a couple of "gotchas" with this technique as well...
First, it still technically does not do any form of range-binding, so it is still technically unbounded (range -∞ to ∞ with a mid-point of 0).
Second, it implicitly assumes that the raw input dataset to be transformed is normally distributed, which won't always be the case in financial markets.
The first "gotcha" is a bit of an annoyance, but isn't a huge issue as we can apply principles of normal distribution to conceptually limit the range by defining a fixed number of standard deviations from the mean. While this doesn't totally solve the "infinite range" problem (a strong enough sudden move can still break out of our "conceptual range" boundaries), the amount of movement needed to achieve that kind of impact will generally be pretty rare.
The bigger challenge is how to deal with the assumption of the input dataset being normally distributed. While most financial markets (and indicators) do tend towards a normal distribution, they are almost never going to match that distribution exactly. So let's dig a bit deeper into how distributions are defined and how things like trending markets can affect them.
Skew (skewness): This is a measure of asymmetry of the bell curve, or put another way, how and in what way the bell curve is disfigured when comparing the 2 halves. The easiest way to visualize this is to draw an imaginary vertical line through the apex of the bell curve, then fold the curve in half along that line. If both halves are exactly the same, the skew is 0 (no skew/perfectly symmetrical) - which is what a normal distribution has (skew = 0). Most financial markets tend to have short, medium, and long-term trends, and these trends will cause the distribution curve to skew in one direction or another. Bullish markets tend to skew to the right (positive), and bearish markets to the left (negative).
Kurtosis: This is a measure of the "tail size" of the bell curve. Another way to state this could be how "flat" or "steep" the bell-shape is. If the bell is steep with a strong drop from the apex (like a steep cliff), it has low kurtosis. If the bell has a shallow, more sweeping drop from the apex (like a tall hill), it has high kurtosis. Translating this to financial markets, kurtosis is generally a metric of volatility as the bell shape is largely defined by the strength and frequency of outliers. Volatile markets tend to have a high level of kurtosis (>3), and stable/consolidating markets tend to have a low level of kurtosis (<3). A normal distribution (our reference) has a kurtosis value of 3.
So to try and bring all that back together, here's a quick recap of the Standardization rescaling method:
The Standardization method has an assumption of a normal distribution of input data by using the mean (average) and standard deviation to handle the transformation
Most financial markets do NOT have a normal distribution (as discussed above), and will have varying degrees of skew and kurtosis
Q: Why are we still favoring the Standardization method over the Normalization method, and how are we accounting for the innate skew and/or kurtosis inherent in most financial markets?
A: Well, since we're only trying to rescale oscillators that by definition have a midpoint of 0, kurtosis isn't a major concern beyond the effect it has on the post-transformation scaling (specifically, the number of standard deviations from the mean we need to include in our "artificially-bound" range definition).
Q: So that answers the question about kurtosis, but what about skew?
A: So - for skew, the answer is in the formula - specifically the mean (average) element. The standard mean calculation assumes a complete dataset and therefore uses a standard (i.e. simple) average, but we're limited by the data history available to us. So we adapted the transformation formula to leverage a moving average that included a weighting element to it so that it favored recent datapoints more heavily than older ones. By making the average component more adaptive, we gained the effect of reducing the skew element by having the average itself be more responsive to recent movements, which significantly reduces the effect historical outliers have on the dataset as a whole. While this is certainly not a perfect solution, we've found that it serves the purpose of rescaling the MACD and AO to a far more well-defined range while still preserving the oscillator behavior and mid-line exceptionally well.
The most difficult parts to compensate for are periods where markets have low volatility for an extended period of time - to the point where the oscillators are hovering around the 0/midline (in the case of the AO), or when the oscillator and signal lines converge and remain close to each other (in the case of the MACD). It's during these periods where even our best attempt at ensuring accurate mirrored-behavior when compared to the original can still occasionally lead or lag by a candle.
Note: If this is a make-or-break situation for you or your strategy, then we recommend you do not use any of the included indicators that leverage this kind of bounding technique (the AO and MACD at time of publication) and instead use the TradingView built-in versions!
We know this is a lot to read and digest, so please take your time and feel free to ask questions - we will do our best to answer! And as always, constructive feedback is always welcome!
Bar Magnifier
Many times while developing algos based on patterns and reversals, I come across issues which need lower-timeframe inspection. Loading multiple charts and comparing equivalent lower timeframes is slightly cumbersome at times. Hence, I thought of building this simple tool - which instantly provides me with lower-timeframe candles for a given candle. Since the candle selection happens via a confirmed time input, we can use it as a slider to move from one candle to another for inspection.
🎲 Usage
🎯Loading the script
When you load the script, a prompt appears which asks you to select a time by clicking on the chart.
Select the bar you want to magnify and study
🎯Components
Once loaded, you can see the marker which tells which bar is magnified. And you can also see all the lower-timeframe candles before that point. Please note that due to Pine restrictions, we can only show the last 250 lower-timeframe bars. You can change the lower timeframe via settings in case the chart timeframe is very high.
🎯Moving to different bars
Click on the middle of the marker, you will see slider which you can slide to move from one bar to other.
For example, after sliding, you will see the lower-timeframe data of the new candle.
🎯Settings
Settings has only two inputs.
Bar time - selects the bar which needs to be inspected.
Lower timeframe - Default is 1 min. Select a timeframe according to your chart timeframe. Timeframes of less than 1 min are not supported by the request.security_lower_tf function and hence will not work.
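A minimal sketch of the core mechanic described above; the default timestamp and label text are placeholders, and drawing the actual lower-timeframe candles is omitted.
//@version=5
indicator("Bar Magnifier sketch", overlay = true)
barTime = input.time(timestamp("01 Jan 2023 00:00 +0000"), "Bar time", confirm = true)
ltf     = input.timeframe("1", "Lower timeframe")
// lower-timeframe closes belonging to the current chart bar
ltfClose = request.security_lower_tf(syminfo.tickerid, ltf, close)
// mark the selected bar and report how many LTF bars were returned for it
if time == barTime
    label.new(bar_index, high, "Magnified bar: " + str.tostring(array.size(ltfClose)) + " LTF bars", style = label.style_label_down)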