WOW!! I started this ambitious project on Oct 3rd, 2023, and just a little over 8 months later, we have amassed nearly 50,000 trials (48,479 to be exact). To review, our goal with the __Time-Machine.com__ project is to:

1. Prove statistically that it is possible to retrieve 1 bit of information from the future.

2. Prove by application that it is possible to profit from that information, where "application" means to "apply" the process by risking capital on the prediction and earning a profit after costs.

The future event we are attempting to predict is the price direction of a futures contract over a 1-hour period. The contract for each prediction is chosen randomly from a basket of 7 different symbols and hidden from all participants. The prediction is either "YES" or "NO", with the question being: "will the price of futures contract S go up between hour T and hour T + 1?" If the prediction is NO, we assume the price will stay the same or go down, and our action is to sell the symbol (take a short position). If the prediction is YES, we assume the price will go up, and we buy the symbol (take a long position).

**HAVE WE ACCOMPLISHED GOAL #1, TO PROVE STATISTICALLY THAT WE CAN RETRIEVE 1 BIT OF INFORMATION FROM THE FUTURE?** YES. After 48,479 trials, my most robust and basic test still shows statistical significance at **z = 1.85**, with pockets of much greater significance.

**TRIAL RESULTS**

**Robust Test**

**48,479 trials, 50.42% correct predicted trials, z= 1.85**

This is the most honest, basic and non-over-fitted test we can perform, counting all 48,479 trials from every single user since the very start, with no exceptions and no filters.
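
For reference, the z scores quoted throughout can be reproduced with the standard binomial z formula. This is a sketch of my own, not the project's actual code, but it recovers the robust-test figure:

```python
import math

def binomial_z(wins: int, n: int) -> float:
    """z score of a binary hit count against 50% chance: the deviation
    from the expected n/2 hits, in units of the binomial standard
    deviation sqrt(n)/2."""
    return (wins - n / 2) / (math.sqrt(n) / 2)

# Reproducing the robust-test figure: 50.42% of 48,479 trials correct.
n = 48_479
wins = round(0.5042 * n)  # 24,443 correct trials
print(round(binomial_z(wins, n), 2))  # -> 1.85
```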

**Noise Filter**

**32,605 trials, 50.76% correct predicted trials, z= 2.72**

Our A.I. trial analysis provides a confidence score that measures how closely the remote-viewing description matched the target photo BEFORE the event, so we can use it to filter out low-scoring trials that could be considered random noise. When we remove all trials with confidence scores below z = 0.3, the z score increases from 1.85 to 2.72. I have been filtering out low confidence scores since my __original ARV experiment__ published 11 years ago, and it is still a valid method of increasing effect size.

**Beginner Effect**

**28,389 trials, 50.77% correct predicted trials, z= 2.59**

Other experimenters have reported higher performance from subjects who had never tried remote viewing before, an effect also known as "beginner's luck" or the "first-timer effect". I have been watching this subset since the start of the project. Each prediction consists of 10 nested trials which users must complete within 20 to 30 minutes, and we compute a consensus of all 10 trial scores to make one prediction. For the Beginner Effect group, we count only each user's first 100 trials (predictions 1 to 10); any trial beyond 100 is considered "experienced" and is NOT included. Including only beginners raises the z score from 1.85 to 2.59.

**Beginner Effect + Filter Noise**

**19,066 trials, 52.25% correct predicted trials, z= 3.43**

Combining the z = 0.3 low-confidence noise filter with the beginner effect results in a highly significant z = 3.43 from almost 20,000 trials.

**Fatigue filter**

**19,453 trials, 50.95% correct predicted trials, z= 2.6**

Since all users must complete 10 trials for one prediction in a single sitting, we noticed that the early trials in a session out-performed the later trials, probably due to fatigue. When we count only trials 2, 3, 4 and 5 (leaving trial 1 out as a "warm-up" trial... I know, this looks a little over-fitted), the z score improves to 2.6.

**Fatigue filter + Beginner effect + Filter Noise**

**7,690 trials, 52.74% correct predicted trials, z= 4.8**

Considering only trials 2, 3, 4 and 5 (fatigue filter), only predictions 1 to 10, i.e. trials 1 to 100 (beginner effect), and a confidence score filter of 0.3 (noise filter) results in a combined trial win rate of 52.74% and a very significant z score of **z = 4.8**.
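
A minimal sketch of how these three filters might be applied together. The per-trial field names (`confidence`, `user_trial_no`, `session_pos`) are my assumptions for illustration, not names from the project:

```python
# Hypothetical per-trial fields: 'confidence' (AI match score),
# 'user_trial_no' (that user's lifetime trial count, 1-based), and
# 'session_pos' (position 1..10 within the prediction session).

def passes_filters(trial: dict) -> bool:
    noise_ok    = trial['confidence'] >= 0.3        # noise filter
    beginner_ok = trial['user_trial_no'] <= 100     # beginner effect
    fatigue_ok  = 2 <= trial['session_pos'] <= 5    # fatigue filter
    return noise_ok and beginner_ok and fatigue_ok

trials = [
    {'confidence': 0.8, 'user_trial_no': 12,  'session_pos': 3},  # kept
    {'confidence': 0.1, 'user_trial_no': 12,  'session_pos': 3},  # noisy
    {'confidence': 0.8, 'user_trial_no': 150, 'session_pos': 3},  # experienced
    {'confidence': 0.8, 'user_trial_no': 12,  'session_pos': 9},  # fatigued
]
kept = [t for t in trials if passes_filters(t)]
print(len(kept))  # -> 1
```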

**TRADE RESULTS**

Trade results are determined by risking capital on the consensus of all 10 trials in a prediction by actually trading the futures symbol. For example, if 7 out of 10 trials for a particular prediction predicted "YES", then the trade bias would be UP (long).
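
The consensus step can be sketched as follows; the vote representation and the tie-breaking rule are my assumptions, since the site's exact behavior isn't stated:

```python
def consensus(trial_votes):
    """Majority vote over the 10 nested trials in one prediction.
    Each vote is 'yes' (price up) or 'no'. A 5-5 tie defaults to
    'short' here; how ties are actually broken is not documented."""
    yes = sum(v == 'yes' for v in trial_votes)
    return 'long' if yes > len(trial_votes) / 2 else 'short'

print(consensus(['yes'] * 7 + ['no'] * 3))  # -> long (buy)
```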

Theoretically, if the trial win rate is over 50%, then the trade win rate should also be over 50% and the trades should be net profitable overall. In reality, however, we have costs to cover: brokerage commissions, exchange fees and slippage, which usually amount to around 2 to 3% of the trade profit. Unfortunately, with a trial win rate of only 52.7%, we would therefore expect to break even or lose money due to that cost hurdle.
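
As a rough sanity check on that cost hurdle, here is a toy break-even calculation assuming symmetric average wins and losses and a per-trade cost expressed as a fraction of the average move. The cost figure is illustrative only, not the project's actual fee schedule:

```python
def break_even_win_rate(cost_fraction: float) -> float:
    """With equal-sized average wins (+1 unit) and losses (-1 unit) and a
    per-trade cost c (as a fraction of the unit move), expectancy is
    p - (1 - p) - c, which is zero at p* = (1 + c) / 2."""
    return (1 + cost_fraction) / 2

# A hypothetical all-in cost of ~5% of the average move puts break-even
# near a 52.5% win rate, which is why 52.7% barely clears the hurdle.
print(round(break_even_win_rate(0.05), 4))  # -> 0.525
```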

**TRADE RESULTS with Fatigue filter + Beginner effect + Noise Filter**

**5,639 trials, 52.6% correct predicted trials, z= 3.88** (a different date range than in the trial results above was used for this configuration, to keep the 1-hour trade holding period consistent). The results shown below confirm that we cannot earn a profit from a 52.6% win rate because we cannot recover trading costs.

**Trade results with costs:**

**# of Trades** = 1613

**% win trades** = 50.65%

**Total profit** = -$10,730

**Trade results without costs:**

**# of Trades** = 1613

**% win trades** = 53.32%

**Total profit** = $31,464

**ENVIRONMENTAL FILTERS**

There are additional trial filters aside from filtering out low confidence scores, fatigued trials 6 to 10, and non-beginners. These other filters are environmental, and could include factors such as time of day, local sidereal time, positions of planets, etc. I'm not going to reveal exactly what these environmental filters are yet, because I don't have conclusive evidence that I am not simply over-fitting the data.

**TRIAL RESULTS with special environmental filters (no other trial filters are applied)**

**4,093 trials, 52.19% correct predicted trials, z= 2.78.** This includes every trial from every user, with no confidence score filtering, no beginner effect filter, and no fatigue filter. The only filter used is the special environmental filter (an example of an environmental filter might be to consider only trials conducted between 9:00 am and 10:00 am). We can see that the application of this external filter increases the trial win rate to 52.19% and the resulting trade win rate to 56.21%, thereby overcoming the trading costs and enabling us to earn a net overall profit.
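
A minimal sketch of a time-of-day filter of the kind given as an example above. The actual environmental filter used by the project is undisclosed; the 9:00 to 10:00 window and the function name are hypothetical:

```python
from datetime import datetime

def in_window(trial_time: datetime, start_hour: int = 9, end_hour: int = 10) -> bool:
    """Hypothetical environmental filter: keep only trials conducted
    between 09:00 and 10:00 local time."""
    return start_hour <= trial_time.hour < end_hour

print(in_window(datetime(2024, 6, 5, 9, 30)))   # -> True  (kept)
print(in_window(datetime(2024, 6, 5, 14, 0)))   # -> False (filtered out)
```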

**Trade results with costs:**

**# of Trades** = 306

**% win trades** = 56.21%

**z score** = 2.17

**Total profit** = $45,127

**Trade Magnitude correlation z score** = 2.76

**TRADE MAGNITUDE CORRELATION**

If we run a Monte Carlo simulation trading a random symbol held over a random one-hour period while forcing exactly 56.21% winning trades, the average profit earned is only about 1/3 of the $45,127 that we actually earned. This "over-profit" amount is statistically very significant at **z = 2.76**. So the question is: how did we earn so much profit with a mere 3% edge over our cost hurdle? (Remember, we have a 2 to 3% trading cost to cover.)
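
The Monte Carlo procedure described here can be sketched roughly as follows. The data layout (a pool of absolute 1-hour point moves) and all parameter names are my assumptions, not the project's actual test harness:

```python
import random
import statistics

def simulate_profit(move_sizes, win_rate, rng):
    """One Monte Carlo run: force exactly win_rate of the trades to win,
    chosen at random, and sum the signed point moves."""
    n = len(move_sizes)
    n_wins = round(win_rate * n)
    idx = list(range(n))
    rng.shuffle(idx)
    winners = set(idx[:n_wins])
    return sum(m if i in winners else -m for i, m in enumerate(move_sizes))

def magnitude_z(actual_profit, move_sizes, win_rate, runs=10_000, seed=1):
    """z score of the actual profit against the simulated profit
    distribution at the same forced win rate."""
    rng = random.Random(seed)
    sims = [simulate_profit(move_sizes, win_rate, rng) for _ in range(runs)]
    return (actual_profit - statistics.fmean(sims)) / statistics.stdev(sims)

# Synthetic pool of absolute 1-hour moves (1..100 points). If the winning
# trades happened to be the 56 largest moves, profit would be 3,070 points;
# the z score measures how far that sits above a random 56% win selection.
moves = list(range(1, 101))
print(round(magnitude_z(3070, moves, 0.56), 1))
```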

The magnitude of the event outcome appears to be correlated with the prediction confidence. In other words, when we achieve a "yes" prediction, the actual outcome is more likely to be large in magnitude than an average-sized move. It would seem that future event outcomes that are large in magnitude are "more predictable" than average-magnitude outcomes.

Yet another way to explain trade magnitude correlation: a 53% trade win rate alone is NOT enough to overcome the costs of the trade (commissions, fees, slippage). However, if we are more likely to predict large-magnitude moves, then a low win rate like 53% might be enough, provided our average profitable trade is much larger than our average losing trade. This seems to be the case. Here is an example: using the environmental filter, I set the trial filters to produce a 52% trial win rate, and the corresponding trade win rate turned out to be 53.35%, which, as we know, is not enough to cover trading costs and earn an overall profit. Yet the actual trades were very profitable, because the winning trades were much larger point moves than the losing trades. A Monte Carlo test resulted in nearly **z = 3.0**, which means our profitable result wasn't a chance occurrence: it was 3 standard deviations from what we could expect from a 53.35% win rate with randomly selected winning trades.
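
The arithmetic behind this can be illustrated with a toy expectancy model. All point values and the cost figure below are hypothetical, chosen only to show the effect:

```python
def net_expectancy(win_rate, avg_win_pts, avg_loss_pts, cost_pts):
    """Per-trade expectancy in points after a fixed per-trade cost."""
    return win_rate * avg_win_pts - (1 - win_rate) * avg_loss_pts - cost_pts

# At a 53% win rate with symmetric 1-point wins and losses and a
# hypothetical 0.08-point cost, the edge is negative...
print(net_expectancy(0.53, 1.0, 1.0, 0.08) > 0)  # -> False (loses after costs)

# ...but if winners average 1.5 points against 1-point losers (magnitude
# correlation), the same 53% win rate becomes clearly profitable.
print(net_expectancy(0.53, 1.5, 1.0, 0.08) > 0)  # -> True
```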

**TRADE MAGNITUDE CORRELATION ONLY POSSIBLE USING EXTERNAL FILTERING**

It appears that if we do not use EXTERNAL environmental filters, the trade magnitude correlation disappears. For example, if I set the trial filters to achieve the same 52% trial win rate as in the paragraph above, but WITHOUT any environmental filtering, the resulting trade win rate is still around 53 to 54%. Yet the resulting equity curve is typical of a 53 to 54% trade win rate, which is very poor because we are not overcoming costs, and we measure NO trade magnitude correlation at all. In other words, we are getting an average selection of 53 to 54% winning trades, NOT the 53 to 54% largest winning trades that we were getting when using the environmental filtering.

To double-check the above finding, I applied the environmental filtering again but deliberately adjusted the trial filters to produce a very poor trial win rate, to see if we could still measure a trade magnitude correlation with a poor trade win rate. The trial win rate for this test was 50.8% and the resulting trade win rate was a very poor 46.86%, yet the Monte Carlo trade magnitude correlation was STILL SIGNIFICANT AT z = 1.9! What this means is that if we use our special environmental filter with the very WORST trial filtering possible (only low-confidence trials, only the fatigued trials 6 through 10, and only experienced users who have already contributed over 100 trials, who are typically poor performers), then we get what we expect from the trades: a very poor ~47% trade win rate and losing money. However, we lose LESS money than could be expected from a random selection of losing trades at that win rate.

**OPPOSITE ENVIRONMENTAL FILTER = OPPOSITE RESULTS**

**THIS IS FASCINATING:** It seems that when we REVERSE the special environmental filter, we also REVERSE our success rate, from significant winning to significant losing. A Monte Carlo test confirmed that the losing trades are significant at z = -1.7, so this "losing streak" isn't a chance occurrence. An example of reversing a special environmental filter would be switching from 12:00 noon to 12:00 midnight, or from the spring equinox to the fall equinox.

**The plot below shows trade profits from using the special environmental filter (no other filters are applied).**

Special environmental filter trades Z score **z = 2.12**

**The plot below shows trade profits from using the reverse of the special environmental filter (no other filters are applied).**

Reversed special environmental filter trades Z score **z = -2.71**
