Based on the data that we have collected so far, we are able to measure significant displacement for some users. What is that displacement? When you remote view your target for a trial, not only do you tend to perceive aspects of the correct target more often than chance would permit (and this is precisely your job as a time traveler), but you also tend to perceive aspects of ALL 9 TARGETS for the other 9 trials in your prediction. It's almost as if your remote viewing lens isn't perfectly focused on the exact feedback event for the trial that you are intending because your intention may be slightly "displaced" over the short amount of time that you view all 10 feedback targets for your prediction.

Some of you don't tend to displace at all, and some of you displace about 50% of the time, meaning 50% of your score comes from displacement and the other 50% from your intended targets. Some of you score much higher when we consider your displacement trials, and some of you score much higher if we consider only your intended trials. The 50-50 split is the default method of analyzing your predictions in the Time-Machine app. If I optimize the % split between intended and displacement for each user individually, then run a "backtest" using this unique profile for each user, we measure significantly better performance, as you would expect — but much of that performance is most probably over-fitted (over-optimized). The question is: is this individualized "profile" consistent for each user, and can it be used going forward to analyze your performance?

To help answer that question, I can run a "walk-forward" optimization test. This is where I optimize the % split between intended and displacement for each user individually over the first half of their data. Then I "walk forward" that optimal profile to the second half of their data (the "unseen" portion, with no look-ahead bias) and measure the performance. If the walk-forward performance using each user's profile beats the default 50-50 split method, then using profiles could be a valid method of analysis going forward.
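The walk-forward procedure can be sketched roughly as follows. This is a minimal sketch, not the Time-Machine app's actual code: it assumes each prediction contributes an "intended" score and a "displacement" score per user, and uses a simple grid search over the split weight for illustration.

```python
def blended(intended, displacement, w):
    """Weight w goes to the intended-target score, (1 - w) to displacement."""
    return w * intended + (1 - w) * displacement

def best_weight(history, n_steps=20):
    """Grid-search the intended/displacement split on in-sample data."""
    candidates = [i / n_steps for i in range(n_steps + 1)]
    return max(candidates,
               key=lambda w: sum(blended(i, d, w) for i, d in history))

def walk_forward(user_scores):
    """Optimize the split on the 1st half, then score the unseen 2nd half."""
    half = len(user_scores) // 2
    w = best_weight(user_scores[:half])    # optimize in-sample only
    test = user_scores[half:]              # held-out data: no look-ahead bias
    mean_score = sum(blended(i, d, w) for i, d in test) / len(test)
    return w, mean_score
```

For a user whose hits come mostly from intended targets, the optimizer pushes the weight toward 1; for a heavy displacer, toward 0. The 50-50 default corresponds to fixing the weight at 0.5 for everyone.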

Here are the results of this walk-forward test:

**Performance on the 2nd half of users' data, using the default 50-50 split between intended and displacement:**

Number of trials = 2036

Trials win rate = 52%

Trials z-score = 1.51

Number of predictions = 167

Predictions win rate = 53.8%

Predictions z-score = 1.01

**Performance on the 2nd half of users' data, using the optimized split between intended and displacement (profiles):**

Number of trials = 2036

Trials win rate = 52%

Trials z-score = 1.55

Number of predictions = 171

Predictions win rate = 51.4%

Predictions z-score = 0.38

**CONCLUSION**

Since the results are so similar, it's difficult to say which method is better. Using custom profiles produces a slightly but insignificantly higher trials z-score (z = 1.55 vs. z = 1.51), while the 50-50 split produces a better, but also insignificantly different, predictions z-score (z = 1.01 vs. z = 0.38). I would have to say that the default 50-50 split might be the better analysis method going forward, simply because it raises no question of over-fitting and, more importantly, it's a simpler method. Following are the results of the 50-50 split method based on all data from all users.

**Performance on all users' data, using the default 50-50 split between intended and displacement:**

Number of trials = 3978

Trials win rate = 52%

Trials z-score = **2.73**

Number of predictions = 304

Predictions win rate = **57%**

Predictions z-score = **2.41**

An overall trial z-score of **2.73** is 2.73 standard deviations from chance expectation!! WELL DONE EVERYONE! This is absolutely phenomenal!
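For reference, "standard deviations from chance expectation" here is the usual binomial z-score. Assuming each trial or prediction is scored as a binary win/loss against a 50% chance rate (an assumption that appears consistent with the win rates and z-scores reported above), it can be computed like this:

```python
import math

def binomial_z(wins, n, p_chance=0.5):
    """Normal approximation to the binomial: (wins - n*p) / sqrt(n*p*(1-p))."""
    expected = n * p_chance
    sd = math.sqrt(n * p_chance * (1 - p_chance))
    return (wins - expected) / sd

# Illustration (counts assumed, not taken from the app): a ~57% win rate
# over 304 predictions is about 173 wins, giving z ≈ 2.41 as reported above.
print(round(binomial_z(173, 304), 2))  # → 2.41
```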

The probability of obtaining a result as extreme as or more extreme than a z-score of 2.73 by chance alone, when guessing the outcomes of random, unseen future events, is approximately 0.3167%. This translates to **odds against chance of approximately 1 in 315.6.** Such an outcome is highly unlikely to occur by random chance, indicating that the guesses (predictions) are not simply random. Odds of 1 in 315 mean that, if the results really were pure chance, we would expect to repeat the entire experiment — nearly 4,000 trials spanning a few months — about 315 times before one of those runs produced an outcome as extreme as z = 2.73. That would take over 75 years!
