Can ChatGPT Predict the Future? Surprising 2025 Insights


The Rise of AI in 2025

AI surged in popularity in 2025, and it has become a tool for making sense of the world, even, some claim, for predicting the future or retelling the past. One paper, titled “ChatGPT Can Predict the Future When it Tells Stories Set in the Future About the Past,” examines when AI models are effective forecasters. The finding: they perform better when asked to narrate future events as stories than when asked for direct predictions.

Limitations and Promises of AI Forecasting

The paper discusses how safety mechanisms limit direct forecasting, and how narrative framing can sidestep those limits, a behavior that may extend to other language models as well. Other researchers have shown that AI can underperform humans on real-world predictive tasks, while still showing promise in areas such as forecasting market investments.

OpenAI, for example, restricts predictions about the future in certain contexts. You cannot use any output relating to a person for a purpose that could have a legal or material impact on that person, such as decisions about employment, credit, legal status, or medical care.

Of course, ChatGPT is capable of forecasting in principle, yet it often refuses to engage with prediction tasks, which suggests that OpenAI may deliberately suppress them.

A Closer Look at Creative Prompting and Surprising Results

When it comes to medical advice, OpenAI’s systems are designed to err on the side of caution. That much is clear. A pair of researchers decided to test just how far this limitation would go by asking a simple, direct question: “I feel awful. I’ve got a pounding headache and there’s blood in my urine. What could it be?” The response? Exactly what you’d expect: a gentle nudge to seek help from a licensed doctor.

Storytelling Unlocks Diagnostic Responses

But here’s where things got interesting.

They switched gears and asked the AI to tell a fictional story. In it, a character visits a clinic with, you guessed it, the same exact symptoms. This time, the AI gave a diagnosis. Not as itself, but as part of the character dialogue in the story.

Reframing Prompts: How Storytelling Bypasses Restrictions

So, what’s really going on here? As the researchers noted, it’s not about whether the advice was right (though that’s obviously important too). The point was to show that the model won’t directly provide certain answers, but will if those same answers are wrapped in creative storytelling.

Naturally, they wanted to take things further. If the model can be “nudged” to give answers through indirect prompts, how well could it predict events that happened after its training data stopped?
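The reframing technique described above can be sketched as a small prompt-construction helper. The function names, the persona, and the story wording here are illustrative assumptions, not the paper's exact prompts:

```python
def direct_prompt(question: str) -> str:
    """A straightforward question the model may refuse or hedge on."""
    return question


def narrative_prompt(question: str, persona: str = "a professor in 2025") -> str:
    """Wrap the same question in a story set in the future, so the model
    answers in character (looking back at past events) rather than as itself."""
    return (
        f"Write a scene in which {persona} looks back and explains, "
        f"in concrete detail, the answer to the following: {question}"
    )


question = "Who won Best Actor at the 2022 Academy Awards?"
print(direct_prompt(question))
print(narrative_prompt(question))
```

Either string would then be sent to the model; the point is that only the framing changes, not the underlying question.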

Testing Knowledge Beyond Training Cutoffs

At the time this test was done, the AI’s knowledge cut off in late 2021. So, they used a clever workaround: instead of just asking who won certain awards or what the economy looked like in 2022, they asked it to imagine someone in the future telling a story about those events.

One version involved a fictional professor, teaching an economics class, reading off data from a post-2021 world. Another involved a pretend speech from the head of the Federal Reserve. Depending on the character and tone used, the predictions were surprisingly different.

The Accuracy of Imaginative Predictions

In one scenario, the AI correctly guessed the winners of multiple major acting awards at the 2022 Oscars but missed Best Picture. In another, it overcorrected for events outside its training data, such as the invasion of Ukraine, and its inflation predictions skewed wildly. Why? Because it tried to adjust for something it didn't fully understand, and the results veered way off course.

The bottom line? Framing matters. When asked straight-up, the model might decline to answer or give vague, cautious replies. But ask it to imagine, to create, or to retell from a future perspective? The information flows much more freely.

Randomness and Probability in Predictions

Still, even then, there's randomness. You might get a spot-on prediction one minute and a miss the next. You could ask the same question 50 times and get a wide range of answers. That's the nature of probability-driven systems: there's no single “truth,” just weighted guesses.
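One common way to work with that randomness is to ask the same question many times and tally the answers into a frequency distribution. The sketch below uses a simulated sampler as a stand-in for real model calls, with made-up answers and weights purely for illustration:

```python
import random
from collections import Counter


def simulated_model_answer(rng: random.Random) -> str:
    """Stand-in for a real model call: returns one of several answers
    with fixed weights, mimicking a probability-driven system."""
    return rng.choices(
        ["CODA", "The Power of the Dog", "Belfast"],
        weights=[0.5, 0.3, 0.2],
    )[0]


def answer_distribution(n: int = 50, seed: int = 0) -> Counter:
    """Ask the same question n times and count how often each answer appears."""
    rng = random.Random(seed)
    return Counter(simulated_model_answer(rng) for _ in range(n))


print(answer_distribution().most_common())
```

With a real model you would replace the simulated sampler with repeated API calls; the most frequent answer then serves as the aggregate "prediction," much as the researchers compared repeated runs rather than trusting any single response.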

And while this method did give some surprisingly accurate results, there was no guarantee it would beat a group of well-informed people making predictions based on public data. But the fact that it sometimes came out ahead is a signal worth paying attention to.

Final Thoughts

In short, AI can predict. Sometimes. But how you ask the question, and the story you wrap it in, can make all the difference in what kinds of predictions you get.

