1. What are the currently active dominant cycles?
2. How long is the active cycle?
3. When are past and future highs / lows?

[![image-1589439314523.png](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/image-1589439314523.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/image-1589439314523.png)
Figure 2: VIX Cycle, Dominant Cycle Length: 180 bars, Source: [https://cycle.tools](https://cycle.tools/) (13. Feb. 2020)
The reading of the VIX sentiment cycles is somewhat different when applied to stock market behavior: data lows mark windows of high market confidence and low fear among participants, which in most cases correspond to highs in stocks and indices. Data highs, on the other hand, represent a state of high anxiety, which appears in its most extreme form at market lows.
**Reading the cycle this way, one would predict a market high in the current period at the end of December 2019 / beginning of 2020, and an expected market low that, according to the VIX cycles, could occur somewhere around April.**
Similar to the identification and forecasting of weather/temperature cycles, we can now identify and predict sentiment cycles.
In terms of trading, one should never follow a purely static cycle forecast. The cycle-in-cycles approach should be used to cross-validate different related markets for the underlying active dominant cycles. If these related markets have cyclical synchronicity, the probability for successful trading strategies increases.
##### Global stock markets
Figure 3 shows the same method applied to the S&P 500 stock market index. The underlying detected cycle has a length of 173 bars and indicates a cycle high at the current time. This predicts a downward trend of the dominant cycle until mid-2020.

[![image-1589439296676.png](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/image-1589439296676.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/image-1589439296676.png)
Figure 3: SP500, Dominant Cycle Length: 173 bars, Source: [https://cycle.tools](https://cycle.tools/) API / NT8 (13. Feb. 2020)
**We have now discovered two linked cycles: a sentiment cycle with a length of about 180 bars, which indicates a low stress level in December 2019 with rising anxiety ahead, and a dominant S&P 500 cycle, which points to an expected market downtrend until summer 2020. Both cycles are synchronous and parallel in length, timing, and direction. This is the key information of a cycle analysis: synchronous cycles in different data sets that could indicate a trend reversal for the market under investigation.**
We can go one step further. Since more than one dominant cycle is usually active, you should also look at the two to three most dominant cycles in a composite cycle diagram.
##### A composite cycle forecast
Figure 4 shows this idea applied to the active cycles of the Amazon stock price. The list on the right shows the currently active cycles identified; the most interesting ones, with lengths of 169 bars and 70 bars, have been marked. The most important cycles can be selected based on the Cycle Strength information and the Bartels score. Without going into the details of these two mathematical parameters, let us simply select the two highest-ranked cycles for this example. Instead of drawing a single dominant cycle into the future, we use both selected dominant cycles for the overlay composite cycle display (purple), which is also extended into the unknown future.
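The composite construction just described can be sketched in a few lines: each detected cycle is modeled as a sine wave, and the selected dominant cycles are summed into one overlay curve that extends into the future. Only the lengths (169 and 70 bars) come from the example; the amplitudes and phases below are illustrative placeholders, not values detected by the scanner.

```python
import numpy as np

def cycle(t, length, amplitude=1.0, phase=0.0):
    """A single sinusoidal cycle component; length is in bars."""
    return amplitude * np.sin(2 * np.pi * t / length + phase)

# Bars 0..299 stand in for "known" data; 300..399 extend into the unknown future.
t = np.arange(400)

# The two selected dominant cycles (169 and 70 bars, per the example);
# amplitudes 1.0 and 0.6 are made up for illustration.
composite = cycle(t, 169, 1.0) + cycle(t, 70, 0.6)
```

In a real analysis the amplitude and phase of each component would come from the detection step, and the curve would be recomputed as each new bar arrives.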

By combining different data sets and the analysis of the dominant cycle, we can detect a cyclical synchronicity between different markets and their dominant cycles.

The mathematical parameters of cycles allow us to project a kind of "window into the future": a projection of the next expected main turning points of the cycle or composite cycle. This information is valuable for trading and trading techniques, especially when you are able to identify similar dominant cycles and composite cycles in related markets that are "in sync".

The examples used here have been kept simple and fairly static to show the basic use of cycle detection and prediction. The projections obtained must be updated with each new data point; it is therefore essential not to perform this analysis only once, statically, but to update it as new data arrives.

Knowing how to use cyclical analysis should be part of any serious trading approach and can increase the probability of successful strategies: if a rhythmic oscillation is fairly regular and lasts for a sufficiently long time, it cannot be the result of chance, and the longer it persists, the more predictable it becomes. There is often a lack of simple, user-friendly applications that put this theory into practice. We have to work on spreading this knowledge and its application, rather than only on the scientific-mathematical refinement of the algorithms. This theory can be applied to any change on our earth, as well as to any change in human beings, in order to understand their nature and predictable behavior.

##### Background Information

***How does the approach used in these examples work?***

The technique applied is based on a digital signal processing algorithm that does all the hard work and math to derive the dominant cycle in a way that is useful for the non-technical user. More information on the cycle scanner framework used for these examples can be found in [this chapter](https://docs.cycle.tools/books/cycle-analysis/chapter/cycle-scanner-framework "Cycle Scanner Framework").

# Cycle Parameters Explained

The following chart summarizes all relevant parameters related to a "perfect" sinewave cycle:

[![PerfectCycle.jpg](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/PerfectCycle.jpg)](https://docs.cycle.tools/uploads/images/gallery/2020-05/PerfectCycle.jpg)

##### **What is Frequency?**

Frequency is the number of times a specified event occurs within a specified time interval.

Example: 5 cycles in 1 second = 5 Hz; 1 cycle in 16 days = 0.0625 cycles/day = 723 nHz

- - - - - -

##### **What is Strength?**

Strength is the relative amplitude of a given cycle per time interval ("amplitude per bar").

Example: A = 213, d = 16, s = 13.2 per d

Read more on Cycle Strength and how to "rank" cycles [here](https://docs.cycle.tools/books/cycle-analysis/page/ranking "Ranking").
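Both definitions are easy to check numerically. A minimal sketch (the function names are mine, not part of any cycle.tools API):

```python
SECONDS_PER_DAY = 86_400

def frequency_hz(cycles: float, days: float) -> float:
    """Frequency in hertz: repetitions per second."""
    return cycles / (days * SECONDS_PER_DAY)

def strength_per_bar(amplitude: float, length_bars: float) -> float:
    """Strength: relative amplitude per bar (amplitude / length)."""
    return amplitude / length_bars

print(frequency_hz(1, 16) * 1e9)   # 1 cycle in 16 days -> about 723 nHz
print(strength_per_bar(213, 16))   # A = 213, d = 16 -> about 13.3 per bar
```

Note that 213 / 16 evaluates to roughly 13.3, close to the 13.2 quoted in the example above.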

- - - - - -

##### **What is the Bartels Score?**

The Bartels score provides a direct measure of the likelihood that a given cycle is genuine and not random. It measures the stability of the amplitude and phase of each cycle.

Formula: B score % = (1 - Bartels Value) \* 100

Range:
- 0 %: cycle influenced by random events, not significant
- 100 %: cycle is significant / genuine

Read more on how to validate cycles with the Bartels score [here](https://docs.cycle.tools/books/cycle-analysis/page/cycle-validation "Cycle Validation")

# Dynamic Nature of Cycles

#### Cycles are not static

Dominant cycles morph over time because of the nature of their inner parameters of length and phase. Active dominant cycles do not abruptly jump from one length (e.g., 50) to another (e.g., 120). Typically, one dominant cycle will remain active for a longer period and vary around its core parameters. The "genes" of the cycle in terms of length, phase, and amplitude are not fixed and will morph around the dominant mean parameters. The assumption that cycles are static over time is misleading for forecasting and cycle prediction purposes.

These periodic motions abound both in nature and the man-made world. Examples include a heartbeat or the cyclic movements of planets. Although many real motions are intrinsically repeated, few are perfectly periodic. For example, a walker's stride frequency may vary, and a heart may beat slower or faster. Once an individual is in a dominant state (such as sitting down to write a book), the heartbeat cycle will stabilize at an approximate rate of 85 bpm. However, the exact cycle will not stay static at 85 bpm but will vary by +/- 10%. The variance is not considered a new heartbeat cycle at 87 bpm or 83 bpm; it is considered the same dominant, active vibration.

This pattern can be observed in the environment as well as in mathematical equations. Real cyclic motions are not perfectly even; the period varies slightly from one cycle to the next because of changing physical environmental factors. Steve Puetz, a well-known cycle researcher, calls this "*period variability*":

> "Period variability – Many natural cycles exhibit considerable variation between repetitions. For instance, the sunspot cycle has an average period of ∼10.75-yr. However, over the past 300 years, individual cycles varied from 9-yr to 14-yr. Many other natural cycles exhibit similar variation around mean periods."

*Puetz (2014): in Chaos, Solitons & Fractals*

This dynamic behavior is also valid for most data series that are based on real-world cycles. Anticipating the current values for length and cycle offset in real time is crucial to identifying the next turn. It requires an awareness of the active dominant cycle parameters and the ability to verify and track the real current status and dynamic variations, which facilitates projection of the next significant event.

Figures 1 to 3 provide a step-by-step illustration of these effects. The illustrations show a grey static cycle; the dynamic variation in the cycle is represented by the red one, whose parameters morph slightly over time.
The marked points A to D represent the deviation between the ideal static and the dynamic cycle.

#### Effect A: Shifts in Cycle Length

The first effect is the contraction and expansion of cycles, or the "cycle breath." Possible cycles are detected from the available data on the left side of the chart. Points A and B show an acceptable fit between both cycles. However, the red dynamic cycle has a slightly greater length parameter. The past data reveal that this is not significant, and there is a good fit between the theoretical static and the dynamic cycle at points A and B. Unfortunately, the future projection area on the right side of the chart, where trading takes place, shows an increasing deviation between the static and dynamic cycle. The difference between them at points C and D is now relatively high.

[![Cycle Length Shifts](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/Cycle_Length_Phase_Shifts.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/Cycle_Length_Phase_Shifts.png)

The real "dynamic" cycle has a slightly greater length parameter. The consequence is that future deviations increase even when the deviations between the theoretical and real cycle are not visible in the area of analysis. These differences are crucial for trading. As trading occurs on the right side of the chart, the core parameters now and for the next expected cycle turn must be detected. A perfect fit with past data or a two-year projection is not the concern; the priority is the here and now, not a mathematical fit with the past. Current market turns must be in sync with the dynamic cycle to detect the next turn. Therefore, just as an individual heartbeat cycle varies around a core number, the cycle length will vary around the dominant parameter by +/- 5%. Following only the theoretical static cycle will not provide information about the next anticipated turning points. However, this is not the only effect.
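The compounding of a small length mismatch can be illustrated with a toy calculation. The 100-bar and 105-bar figures below are hypothetical, chosen to match the +/- 5% variation discussed above:

```python
def drift_bars(static_len: float, dynamic_len: float, n_cycles: int) -> float:
    """Timing error, in bars, between a static cycle and a slightly
    longer dynamic cycle after n_cycles repetitions."""
    return n_cycles * (dynamic_len - static_len)

# A static 100-bar cycle vs. a dynamic cycle actually running at 105 bars (+5%):
for n in range(1, 5):
    print(n, drift_bars(100, 105, n))
# After 4 repetitions the static projection is already 20 bars off, a large
# fraction of the 50-bar half-cycle that separates a high from a low.
```

The error is invisible within one repetition but grows linearly with every cycle projected into the future, which is exactly why the projection area suffers first.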
**Animated Video - Length Shifts:**

- - - - - -

#### Effect B: Shifts in Cycle Phase

The next effect is "offset shifts." In this case, the cycle length parameter is the same for the static theoretical and the dynamic cycle. The dynamic cycle at point A presents a slight offset shift at the top; in mathematical terms, the phase parameter has morphed. This effect remains fixed into the future: a static deviation is observed between the highs and the lows.

[![Cycle Phase Shift ](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/Cycle_Length_Phase_Shifts_a.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/Cycle_Length_Phase_Shifts_a.png)

This is not a one-time effect: the phase of the dominant cycle will also change continuously by +/- 5% around the core dominant parameters.

**Animated Video - Phase Shifts:**

- - - - - -

#### The Combined Effects

In practice, both effects occur in parallel and change continuously around the core dominant parameters. Figure 3 presents a snapshot of both effects with the theoretical and the dynamic cycle. The deviation in the projection area at points C and D shows that simply following the static theoretical cycle will rapidly become worthless.

[![Cycle Combined Shift](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/Cycle_Combined_Shifts.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/Cycle_Combined_Shifts.png)

The deviation is such that, at point D, a cycle high is expected for the theoretical static cycle (grey) while the real dynamic cycle (red) remains low. These two effects occur continuously. Although the alignment in the past (points A and B) appears acceptable between the static and dynamic cycle, the deviation in the projection area (points C and D) is so high that trading the static cycle will lead to failure.
**Animated Video - Combined Effects:**

A cycle forecasting example incorporating these effects illustrates the consequences on the right side of the chart. We examine the following two examples, named "A" and "B". The price chart is the same in both examples and is represented by a black line. In both examples, a dominant cycle is detected (red cycle plot) and plotted with the price. In both cases, two variations of the same dominant cycle are detected: the tops and lows show alignment with the price data, and two cycle tops and two cycle lows align. This implies that the same dominant cycle is active in both charts. There is one core dominant cycle, and the two detected cycles are variations of it. Therefore, from an analytical perspective, both cycles could be considered valid based on the available dataset.

[![Detected Cycles](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/A.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/A.png)

The effects reveal that even when the deviation in past data appears negligible, it can significantly impact the projection area. We examine the projection of both cycles.

[![Cycle Projection Example](https://docs.cycle.tools/uploads/images/gallery/2020-05/scaled-1680-/AB2.png)](https://docs.cycle.tools/uploads/images/gallery/2020-05/AB2.png)

We observe two contrasting projections. Example A shows a bottoming cycle with a projected upturn to a future top. Example B shows the opposite: a topping cycle with an expected future downturn. While we can detect a dominant cycle on the left side of the chart, the detailed dynamic parameters are the significant differentiators and are crucial to a valid and credible projection. Classic static cycle projections often fail for this reason. Detecting the active dominant cycle represents only one part of the process.
The second part is to consider the current dynamic parameters with respect to length and phase. Although a perfect fit between price and a static cycle in the distant past might appear convincing from a mathematical perspective, it is misleading because it ignores the dynamic cycle components. Doing so simplifies the math, but is of no value for trading on the right side of the chart. An examination of past perfect-fit static cycles is not necessary. Observing two to five significant correlations of tops and lows, AND considering the current dynamic component updates, will yield valid trading cycle projections. This example underpins the significance of an approach that combines a dominant cycle detection engine with a dynamic component update.

- - - - - -

#### Video Lesson – Dynamic Cycles Explained

The following video illustrates the two effects in action (6 min.)

# Asymmetric Business Cycles and Skew Factors

**Preface:** Cycle analysis and cycle forecasting often imply a symmetric time distribution between high to low and low to high. This is the underlying framework used by anyone applying mathematical signal processing to cycles and producing cycle-based composite forecasts. This technique now faces a new challenge that has emerged over the past 30 years, based on financial regulations impacting today's economic business cycle. The following article highlights the situation and presents a proposed skew factor to account for this behavior in cycle forecasting models.

Business cycles are a fundamental concept in macroeconomics. The economy has been characterized by an increasingly negative cyclical asymmetry over the last three decades. Studies show that recessions have become relatively more severe, while recoveries have become smoother, as recently highlighted by Fatas and Mihov. Moreover, recessive episodes have become less frequent, suggesting longer expansions. As a result, booms are increasingly smoother and longer-lasting than recessions.

These characteristics have led to an increasingly negative distortion of the business cycle in recent decades. Extensive literature has examined the statistical properties of this empirical regularity in detail and confirmed that contractions tend to be sharper and faster than expansions. In a paper published in the American Economic Journal in January 2020, Jensen et al. summarized:

> Booms become progressively smoother and more prolonged than busts. Finally, in line with recent empirical evidence, financially driven expansions lead to deeper contractions, as compared with equally sized nonfinancial expansions.

When recessions become faster and more severe and recoveries softer and longer, standard symmetric cycle models are doomed to fail. This new pattern challenges the existing standard, symmetric, 2-phase cycle models, which are based on a time-symmetric distribution of dominant cycles with mathematical sine-based counting from low to low or high to high. These models lose their forecasting ability once a uniform distribution from high to low and low to high is no longer given. A new model is needed: a dynamic skew cycle model that includes a skew factor.

Before introducing a new mathematical model to account for the asymmetric behavior, the cycle difference is visualized and compared in some diagrams. The following illustration shows a classical, symmetric 2-phase cycle on the left (green) and an asymmetric 3-phase cycle on the right (red).

#### Asymmetric Cycle Model

The model shown in Chart 1 uses a simplified formula that allows different distortions of the phases via a skew factor, while keeping the length of the whole cycle, from peak to peak, the same without distortion.
[![image-1596148605464.png](https://docs.cycle.tools/uploads/images/gallery/2020-07/scaled-1680-/image-1596148605464.png)](https://docs.cycle.tools/uploads/images/gallery/2020-07/image-1596148605464.png)

*Chart 1: Comparing 2-phase symmetric (green) and 3-phase asymmetric cycle models (red)*

The new "skew factor" used in the red model makes the upswing phase twice as long as the recession, while preserving the total duration and amplitude of the standard 2-phase cycle model (green, left). This allows us to model identified cycle lengths and strengths in the 3-phase model (red, right). If we add the skew factor to the traditional mathematical cycle algorithms, we get cycle models that account for the asymmetric changes mentioned above, and the cycle models can again be used for forecasts.

#### Example: The skew factor on the S&P 500 index

Chart 2 shows a detected dominant, symmetric cycle with a length of 175 bars in January 2020 for the S&P 500 index. The light blue price data were not known to the cycle detection algorithm and represent the out-of-sample forecast range. The cycle is shown as a pink overlay. This symmetric cycle forecast predicts a peak as early as the end of 2019 and a new cycle low in May.

[![image-1596523992482.png](https://docs.cycle.tools/uploads/images/gallery/2020-08/scaled-1680-/image-1596523992482.png)](https://docs.cycle.tools/uploads/images/gallery/2020-08/image-1596523992482.png)

*Chart 2: S&P 500 with 175-day symmetric cycle, skew factor: 0.0, date of analysis: 16 Jan. 2020*

As can be seen, the predicted high came earlier than the real market top, and the predicted low came later than the real market low. This is a common observation when using symmetric cycle models in today's markets. The analyst can now anticipate, based on knowledge of asymmetric variation, that the predicted high will be too early and the plotted low too late.
However, this requires additional knowledge from the analyst that is not represented in the model itself. A better approach is to include this knowledge directly in the modeling of the cycle projection. Therefore, we now add the skew factor to the detected cycle analysis. In the next graph (Chart 3), a skew factor is applied to the same 175-day cycle. The date of analysis is still January 16, and the light blue area is the out-of-sample prediction period. Here the asymmetric cycle forecast projects the peak for late January and the low for March 2020. The real price followed this asymmetric cycle projection much more closely.

[![image-1596523937171.png](https://docs.cycle.tools/uploads/images/gallery/2020-08/scaled-1680-/image-1596523937171.png)](https://docs.cycle.tools/uploads/images/gallery/2020-08/image-1596523937171.png)

*Chart 3: S&P 500 with 175-day asymmetric cycle, skew factor: 0.4, date of analysis: 16 Jan. 2020*

This example demonstrates the importance of adapting traditional cycle prediction models by adding a skew factor. The introduction of a skew factor is based on current scientific knowledge of the changed, asymmetric business cycle behavior. The next paragraph explains how this asymmetry can be applied to existing mathematical cycle models by introducing the skew factor formula.

#### Cycle Skew

The skew factor allows an asymmetric shape for business cycles to be represented in a cyclic model, as shown in the following examples. The green cycle is a standard sine-wave cycle (s_{kew} = 0).

| s_{kew} = 0.5 | s_{kew} = 0.75 |
| --- | --- |
| [![image-1596093375577.png](https://docs.cycle.tools/uploads/images/gallery/2020-07/scaled-1680-/image-1596093375577.png)](https://docs.cycle.tools/uploads/images/gallery/2020-07/image-1596093375577.png) | [![image-1596093433444.png](https://docs.cycle.tools/uploads/images/gallery/2020-07/scaled-1680-/image-1596093433444.png)](https://docs.cycle.tools/uploads/images/gallery/2020-07/image-1596093433444.png) |

| s_{kew} = -0.5 | s_{kew} = -0.75 |
| --- | --- |
| [![image-1596093514687.png](https://docs.cycle.tools/uploads/images/gallery/2020-07/scaled-1680-/image-1596093514687.png)](https://docs.cycle.tools/uploads/images/gallery/2020-07/image-1596093514687.png) | [![image-1596093542766.png](https://docs.cycle.tools/uploads/images/gallery/2020-07/scaled-1680-/image-1596093542766.png)](https://docs.cycle.tools/uploads/images/gallery/2020-07/image-1596093542766.png) |

Desmos interactive playbook: [https://www.desmos.com/calculator/ejq06faf93](https://www.desmos.com/calculator/ejq06faf93 "Desmos Skew Playbook")
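The skewed shapes shown above can be emulated with a simple phase-warp construction. This is a sketch of one possible way to build such a cycle, not the exact equation published in the article: the rising phase is stretched to a fraction (1 + skew) / 2 of the period while total length and amplitude stay unchanged.

```python
import math

def skewed_cycle(t: float, length: float, skew: float, amplitude: float = 1.0) -> float:
    """Asymmetric cycle with unchanged period and amplitude.

    The rising (trough-to-peak) phase occupies a fraction (1 + skew) / 2
    of the period, the falling phase the remainder; skew = 0 gives the
    symmetric case.  Illustrative construction, not the article's formula."""
    u = (t / length) % 1.0              # position inside the cycle, 0..1
    p = (1.0 + skew) / 2.0              # fraction of the period spent rising
    if u < p:                           # trough -> peak
        return -amplitude * math.cos(math.pi * u / p)
    return amplitude * math.cos(math.pi * (u - p) / (1.0 - p))
```

With skew = 1/3 the upswing is exactly twice as long as the downswing, similar to the red 3-phase model in Chart 1; negative skew values mirror the distortion.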

#### Math & Code

##### Equation

To apply the cycle skew, a skewed sine wave equation is introduced in place of the pure sine wave, sin(x):

![](https://docs.cycle.tools/uploads/images/drawio/2020-08/Drawing-6-1596616979.png)

This article was published in the [CYCLES MAGAZINE](https://journal.cycles.org/Issues/Vol48-No2-2021/index.html?page=80), Jan. 2021, The Official Journal of the Foundation for the Study of Cycles, Vol. 48, No. 2, 2021, page 80ff. (Source link: [https://journal.cycles.org/Issues/Vol48-No2-2021/index.html?page=80](https://journal.cycles.org/Issues/Vol48-No2-2021/index.html?page=80))

The Bartels test returns a value that measures the likelihood that a cycle is genuine: values range from 0 to 1, and the lower the value, the less likely it is that the cycle is due to chance or randomness.

The test considers both the consistency and the persistence of a given cycle within the data set it is applied to. To make it more human-readable, since we are looking for an easily interpretable indication of whether a cycle is genuine, we convert the raw Bartels value into a percentage that indicates how likely the cycle is to be genuine, using the conversion formula: **Cycle Score Genuine % = (1 - Bartels Score) \* 100**
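The conversion, together with a threshold filter of the kind described in this chapter, can be sketched as follows. The dict layout and the example values are illustrative, not a fixed Cycle Scanner data structure:

```python
def genuine_pct(bartels_value: float) -> float:
    """Cycle Score Genuine % = (1 - Bartels value) * 100."""
    return (1.0 - bartels_value) * 100.0

def keep_genuine(cycles, threshold=49.0):
    """Drop cycles whose genuine percentage is at or below the threshold."""
    return [c for c in cycles if genuine_pct(c["bartels"]) > threshold]

candidates = [
    {"length": 110, "bartels": 0.30},   # genuine ~70% -> kept
    {"length": 60,  "bartels": 0.65},   # genuine ~35% -> dropped
]
print(keep_genuine(candidates))
```

Only cycles passing the threshold move on to the ranking and forecasting steps.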

This gives us a value between 0% (random) and 100% (genuine). The test now helps us filter out possible cycles that were detected in the cycle detection step (Step 2) but had been present in the data series only for a short or random period, and should therefore not be considered dominant cycles of the underlying original data series. As we have a final percentage score, we only need to define a threshold below which cycles are skipped. We recommend using a threshold of 49%: cycles with a Bartels genuine percentage below 49% should be skipped by any cycle forecasting or analysis techniques that follow.

Further reference: [\[1\]](#_ftnref1) C. E. Armstrong, Cycles Magazine, October 1973, p. 231ff, "Part 25: Testing Cycles for Statistical Significance" (see pdf attachment)

# Ranking

An important final step in making sense of the cyclic information is to establish a measurement for the strength of a cycle. Once ranking and sorting of the detected cycles is completed, we have cycles that are dominant (based on their amplitude) and genuine (considering their driving force in the financial market). For trading purposes, this does not suffice: the price influence of a cycle per bar on the trading chart is the most crucial information. Let me give an example by comparing two cycles. One cycle has a wavelength of 110 bars and an amplitude of 300; the other has a wavelength of 60 bars and an amplitude of only 200. If we apply the "standard" method for determining the dominant cycle, namely selecting the cycle with the highest amplitude, we would select the cycle with the wavelength of 110 and the amplitude of 300.
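The amplitude-based selection, and the per-bar strength measure the text introduces next, can be reproduced directly. A minimal sketch (function and field names are mine):

```python
def cycle_strength(amplitude: float, length: float) -> float:
    """Price influence per bar: amplitude divided by wavelength."""
    return amplitude / length

cycles = [
    {"length": 110, "amplitude": 300},   # highest raw amplitude
    {"length": 60,  "amplitude": 200},   # highest strength per bar
]

# Rank by strength per bar instead of raw amplitude:
ranked = sorted(cycles, key=lambda c: cycle_strength(c["amplitude"], c["length"]),
                reverse=True)
print([(c["length"], round(cycle_strength(c["amplitude"], c["length"]), 1))
       for c in ranked])
# -> [(60, 3.3), (110, 2.7)]: the 60-bar cycle leads despite its lower amplitude
```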
But let us look at the force of each cycle per bar:

- Length 110 / Amplitude 300 = Strength per bar: 300 / 110 = 2.7
- Length 60 / Amplitude 200 = Strength per bar: 200 / 60 = 3.3

For trading, it is more important to know which cycle has the biggest influence on price per bar, not merely which cycle has the highest amplitude. That is the reason for introducing the measurement value "**Cycle Strength**", which the Cycle Scanner calculates automatically. To build a ranking from the remaining cycles, we recommend sorting them by their influence per bar: as we are looking for the most dominant cycles, these are the cycles that influence the movement of the data series the most per single bar. Sorting by the calculated cycle strength score gives a top-to-bottom list of the cycles with the highest influence on price movement per bar.

#### What is the dominant cycle?

After the cycle scanner engine has completed all steps (detrend, detect, validate, rank), the cycle at the top of the list (with the highest cycle strength score) provides the ***dominant cycle***. The wavelength of this cycle is the dominant market vibration, which is very useful for cycle prediction and forecasting. However, the result is not limited to the cycle length: we also know, and this is very important, the current phase status of this cycle (not the averaged phase over the full data set). This allows more valid cycle projections on the "right" side of the chart for trading, instead of the commonly used "averaged" phase status over the full data set.

# Spectral Averaging

#### Cycles are not static, so what?
Most cycle analysts have seen the moment-to-moment fluctuations, often referred to as shifts in the length and phase of a cycle, that are common in continuous measurements of a supposedly steady spectrum of financial records. We know that cycles are not static in real life: we can see these variations in size and phase for each cycle length in the updated spectrogram after a new data point/bar is added to the data set. Although small variations are certainly no cause for concern, we do not live in an ideal, noise-free world, and we are always looking for ways to reduce the phase shifts and variations in dominant cycles. Therefore, instead of additionally smoothing the input data, we apply averaging to the derived cycle spectrum.

The basic idea of averaging to reduce spectral noise is the same as averaging - or smoothing - the input signal. However, touching the raw input signal with averaging reduces data accuracy and delays the signal. This is not what we need, especially because the end of the data set - the current time - is of the utmost importance when working with cycles on financial time series. Averaging the input and adding a delay to our series of interest is therefore a bad idea for cycle analysis of data sets where the most recent data points matter more than data points from long ago.

#### Don't smooth the input - average the spectrum!

Averaging a spectrum can reduce fluctuations in cycle measurements, making it an important part of spectrum measurements, without changing the input signal or adding delay. Spectral averaging is a kind of ensemble averaging, meaning that the "sample" and the "average" are both cycle spectra: the "average" spectrum is obtained by averaging the "sample" spectra. However, due to the nature of the spectra, this is not quite as simple as it sounds.
If you apply a discrete Fourier transform routine to a set of real-world samples to find a spectrum, the output is a set of complex numbers representing the magnitude and phase of that spectrum. Calculating an average spectrum involves averaging over common frequencies in several spectra. **Spectral averaging eliminates the effect of phase noise.**

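The core operation (average the power spectra, then take the square root) can be sketched with NumPy. For simplicity this sketch uses equal-length overlapping segments in the style of the Bartlett/Welch methods; the scanner described in this chapter instead keeps the window end fixed at the latest bar, but the variance-reduction effect is the same:

```python
import numpy as np

def averaged_amplitude_spectrum(x, seg_len=256, overlap=0.5):
    """Average the power spectra of overlapping, mean-removed segments,
    then take the square root to obtain an amplitude-like spectrum."""
    x = np.asarray(x, dtype=float)
    step = max(1, int(seg_len * (1 - overlap)))
    power = []
    for start in range(0, len(x) - seg_len + 1, step):
        seg = x[start:start + seg_len]
        seg = seg - seg.mean()                     # remove the offset level
        power.append(np.abs(np.fft.rfft(seg)) ** 2)
    return np.sqrt(np.mean(power, axis=0))

# A synthetic series similar to the example below: 45- and 80-day cycles plus noise.
rng = np.random.default_rng(0)
t = np.arange(1024)
x = (np.sin(2 * np.pi * t / 45) + np.sin(2 * np.pi * t / 80)
     + rng.normal(0.0, 1.0, t.size))
spec = averaged_amplitude_spectrum(x)
```

Because the power spectra are averaged before the square root is taken, the differing phases of the individual segments cancel out of the estimate, which is exactly the phase-noise elimination described above.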
The magnitude of the spectrum is independent of time shifts in the input signal, but the phase can change with each data set. By averaging the power spectra and taking the square root of the result, we eliminate the effect of phase variation. **Reducing the noise variance helps us to distinguish small real cycles from the largest noise peaks.** The result of spectral averaging is an estimate of the spectrum containing the same amount of energy as the source. Although the noise energy is preserved, the variance (noise fluctuation) in the spectrum is reduced. This reduction helps us avoid detecting "false cycles" that are the result of single, one-time noise peaks.

Let's illustrate the concept of spectral averaging with a simple example. This method is a derivative of the so-called Bartlett method when no window other than a rectangular window is applied to the data sections.

#### Cycle detection example

We use a constructed, simplified data set consisting of two cycles, noise, and different trends. The two cycles have lengths of 45 and 80 days:

[![image-1597074936323.png](https://docs.cycle.tools/uploads/images/gallery/2020-08/scaled-1680-/image-1597074936323.png)](https://docs.cycle.tools/uploads/images/gallery/2020-08/image-1597074936323.png)

Raw input signal consisting of cycles, noise and trends

Let us look at the spectrogram results *without* (1) and *with* (2) spectral averaging. The first diagram shows the basic amplitude spectrum of the signal without spectral averaging.

[![image-1597075174397.png](https://docs.cycle.tools/uploads/images/gallery/2020-08/scaled-1680-/image-1597075174397.png)](https://docs.cycle.tools/uploads/images/gallery/2020-08/image-1597075174397.png)

Chart 1: Cycle spectrum without spectrum averaging

It clearly shows the cycles, with amplitude peaks at 45 and 80 days, but you can also identify lower peaks in the spectrum that have nothing in common with real cycles in the original dataset.
These smaller amplitude peaks are "false cycles": noise only. We want to avoid using them in cycle forecasting models. Since it is important to identify and separate peaks caused by noise, let us see whether spectral averaging offers added value.

The next diagram uses the same input signal, but instead of generating a single spectrum, it computes several spectra and averages them. In our case, the windowing changes only the beginning of the series and always uses the same end point for each spectrum. This ensures that we have overlapping windows that always include the last available close/bar and differ only in where the series begins.

[![image-1597075260299.png](https://docs.cycle.tools/uploads/images/gallery/2020-08/scaled-1680-/image-1597075260299.png)](https://docs.cycle.tools/uploads/images/gallery/2020-08/image-1597075260299.png)

Chart 2: Cycle spectrum with spectrum averaging

The result shows that the peaks at 45 and 80 days are still present, but the noise level is now much lower, and thanks to the averaged spectral windows there are fewer "false cycle" peaks. This helps us distinguish between important and unimportant cycles for future cycle-prediction modeling techniques.

References:

- (2008): "The Spectral Analysis of Random Signals", in: Digital Signal Processing. Signals and Communication Technology. Springer, London. https://doi.org/10.1007/978-1-84800-119-0\_7
- Oppenheim and Schafer (2009): "Discrete-Time Signal Processing", Chapter 10, Prentice Hall, 3rd edition.
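The overlapping-window scheme described above (same end point, varying start) can be sketched as follows. The per-period correlation measure, the number of windows, and the minimum window fraction are assumptions for illustration, not the tool's exact implementation:

```python
import numpy as np

def cycle_amplitude(x, period):
    """Amplitude of a cycle of the given length in bars.

    Correlates the series with a sine and cosine of that period; this works
    for any segment length, so windows of different sizes can share one
    period axis.
    """
    n = len(x)
    t = np.arange(n)
    a = 2.0 / n * np.dot(x, np.cos(2 * np.pi * t / period))
    b = 2.0 / n * np.dot(x, np.sin(2 * np.pi * t / period))
    return np.hypot(a, b)

def averaged_cycle_spectrum(x, periods, n_windows=8, min_frac=0.5):
    """Average the power over windows that all end at the last bar.

    Each window keeps the same end point and moves only its start forward,
    so every spectrum includes the most recent data.
    """
    n = len(x)
    starts = np.linspace(0, int(n * (1 - min_frac)), n_windows).astype(int)
    powers = []
    for s0 in starts:
        seg = x[s0:]  # same end point, later and later start
        powers.append([cycle_amplitude(seg, p) ** 2 for p in periods])
    return np.sqrt(np.mean(powers, axis=0))
```

Run against the two-cycle test signal, the averaged spectrum keeps strong peaks at 45 and 80 bars while off-cycle periods stay near the (now lower) noise floor.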
- Bartlett (1950): "Periodogram Analysis and Continuous Spectra", Biometrika, Volume 37, Issue 1-2, June 1950, Pages 1-16, https://doi.org/10.1093/biomet/37.1-2.1
- Wikipedia: "Bartlett's Method", [https://en.wikipedia.org/wiki/Bartlett%27s\_method](https://en.wikipedia.org/wiki/Bartlett%27s_method)
- Shearman (2018): "Take Control of Noise with Spectral Averaging", https://www.dsprelated.com/showarticle/1159.php

# Endpoint flattening

Digital signal processing (e.g. the discrete Fourier transform) assumes that the time-domain data set is periodic and repeats. Suppose a price series starts at 3200, toggles and wobbles for 800 data samples, and ends at the value 2400. The DFT assumes that the price series starts at zero, suddenly jumps to 3200, moves to 2400, suddenly jumps back to zero, and then repeats. The DFT has to create all sorts of different frequencies in the frequency domain to reproduce this kind of behavior. These false frequencies, generated to match the jumps and the high average price, mask the amplitudes of the true frequencies and make them look like noise. Fortunately, this effect can be nearly eliminated by a simple technique called **endpoint flattening**.
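A minimal sketch of endpoint flattening, assuming the common form of the technique: subtract the straight line through the first and last samples, so the jump at the periodic boundary disappears.

```python
import numpy as np

def endpoint_flatten(x):
    """Subtract the line through the first and last samples.

    The de-trended result starts and ends at exactly zero, so the DFT's
    implicit periodic repetition no longer sees a jump at the boundary.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    line = x[0] + (x[-1] - x[0]) * np.arange(n) / (n - 1)
    return x - line
```

Applied to the 3200-to-2400 example above, the flattened series begins and ends at zero, removing the artificial step that would otherwise smear energy across the whole spectrum.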

##### Example

The following chart shows an example data series (green) and the de-trended data in the bottom panel (gold) *without* endpoint flattening:

[![image-1626255996599.png](https://docs.cycle.tools/uploads/images/gallery/2021-07/scaled-1680-/image-1626255996599.png)](https://docs.cycle.tools/uploads/images/gallery/2021-07/image-1626255996599.png)

The next example shows the same data series, now *with* endpoint flattening applied to the de-trended series:

[![image-1626256103968.png](https://docs.cycle.tools/uploads/images/gallery/2021-07/scaled-1680-/image-1626256103968.png)](https://docs.cycle.tools/uploads/images/gallery/2021-07/image-1626256103968.png)

The difference is only visible at the beginning and the end of the two de-trended series: while the first starts below zero and ends well above zero, the second chart shows that the de-trended series starts and ends at zero.

##### Math formula

Calculating the coefficients for endpoint flattening is simple: take n closing prices. If x(1) represents the first price in the sampled data series, x(n) represents the last point in the data series, and x