
Understanding Prediction Confidence Intervals in Stock Forecasting


Most stock prediction services give you a single number: “AAPL will hit $200.” But that number alone is nearly useless. Will it hit $200 ± $5 or $200 ± $50? The difference between those scenarios is everything.

Confidence intervals transform predictions from false precision into actionable probability estimates. Understanding them is essential for anyone using forecasts in their investment process.

The Problem with Point Estimates

A point estimate is a single predicted value. It looks precise, which is exactly the problem.

What Point Estimates Hide

| What You See | What You Don’t Know |
|---|---|
| AAPL → $200 | How confident is the model? |
| NVDA → $500 | What’s the range of likely outcomes? |
| TSLA → $300 | Is this a high or low conviction forecast? |

Every prediction has uncertainty. Models that don’t show you that uncertainty are hiding information—whether intentionally or not.

The False Precision Trap

Consider two scenarios:

Scenario A: Model predicts AAPL → $200. Actual outcome: $198.
Scenario B: Model predicts AAPL → $200. Actual outcome: $150.

Both started with the same point estimate. But if Scenario A had a confidence interval of $195-$205 and Scenario B had $150-$250, the outcomes tell very different stories:

  • Scenario A: Prediction was precise and accurate
  • Scenario B: Prediction was imprecise, outcome within expected range

Without the interval, you can’t distinguish between a model that’s reliably accurate and one that’s occasionally lucky.

What is a Confidence Interval?

A confidence interval is a range of values that likely contains the true outcome, based on the model’s uncertainty.

Components

| Component | Description | Example |
|---|---|---|
| Point estimate | Central prediction (mean) | $200 |
| Lower bound | Pessimistic scenario | $190 |
| Upper bound | Optimistic scenario | $210 |
| Confidence level | Probability the range contains the true value | 70% |

When a model says “$200 with a 70% confidence interval of $190-$210,” it means:

  • The best estimate is $200
  • There’s roughly a 70% probability the actual price falls between $190 and $210
  • There’s a 30% probability it falls outside that range

Interpreting Width

The width of the interval tells you about model confidence:

| Interval Width | Interpretation |
|---|---|
| Narrow ($200 ± $5) | High confidence, low uncertainty |
| Wide ($200 ± $30) | Low confidence, high uncertainty |
| Varies by ticker | Some assets are more predictable |
| Widens over time | Uncertainty compounds into the future |

A narrow interval on a volatile stock should make you suspicious. A wide interval on a stable stock might indicate model uncertainty rather than true volatility.
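One hedged way to operationalize that suspicion is to compare the interval’s implied move against realized volatility. A minimal sketch (the function name, the one-standard-deviation assumption for a ~70% interval, and the 0.5/2.0 thresholds are all illustrative, not part of any API):

```python
import math

def interval_sanity_check(lower, upper, price, daily_vol, horizon_days):
    """Flag intervals that look implausible next to realized volatility.
    Assumes a ~70% interval should span roughly one standard deviation
    of the horizon's expected move; thresholds are illustrative."""
    half_width_pct = (upper - lower) / 2 / price
    expected_move = daily_vol * math.sqrt(horizon_days)  # sqrt-of-time scaling
    ratio = half_width_pct / expected_move
    if ratio < 0.5:
        return "suspiciously narrow"
    if ratio > 2.0:
        return "unusually wide"
    return "plausible"

# A ±1% interval on a stock with 2% daily volatility, 5 days out
print(interval_sanity_check(198, 202, 200, 0.02, 5))  # suspiciously narrow
```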

Why Confidence Intervals Matter for Trading

1. Position Sizing

The width of a confidence interval should inform how much capital you allocate:

```python
def calculate_position_size(prediction, portfolio_value, risk_per_trade=0.02):
    """
    Size positions based on prediction confidence.
    Wider intervals = smaller positions.
    """
    point_estimate = prediction['main']
    lower_bound = prediction['lower']
    upper_bound = prediction['upper']

    # Calculate expected move and uncertainty
    expected_return = (point_estimate - prediction['current_price']) / prediction['current_price']
    uncertainty = (upper_bound - lower_bound) / point_estimate

    # Confidence-adjusted position sizing:
    # higher uncertainty = smaller position
    confidence_multiplier = 1 / (1 + uncertainty)
    max_position = portfolio_value * risk_per_trade
    adjusted_position = max_position * confidence_multiplier
    return adjusted_position
```

This approach takes smaller positions when the model is less certain—matching bet size to conviction.

2. Risk/Reward Assessment

Confidence intervals let you assess asymmetry:

| Current Price | Prediction | Lower Bound | Upper Bound | Assessment |
|---|---|---|---|---|
| $100 | $120 | $95 | $145 | Asymmetric upside (risk $5, reward $45) |
| $100 | $120 | $80 | $140 | Moderate (risk $20, reward $40) |
| $100 | $120 | $70 | $130 | Unfavorable (risk $30, reward $30) |

The point estimate is identical in all three cases. The interval changes the trade from attractive to unattractive.
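The table’s arithmetic is simple enough to script. A small helper (hypothetical, not an SDK function) that reproduces the risk/reward figures above:

```python
def risk_reward(current, lower, upper):
    """Risk = distance to the lower bound, reward = distance to the upper bound."""
    risk = current - lower
    reward = upper - current
    ratio = reward / risk if risk > 0 else float('inf')
    return risk, reward, ratio

# The three scenarios from the table (point estimate $120 in every case)
for lower, upper in [(95, 145), (80, 140), (70, 130)]:
    risk, reward, ratio = risk_reward(100, lower, upper)
    print(f"risk ${risk}, reward ${reward}, ratio {ratio:.1f}")
# risk $5, reward $45, ratio 9.0
# risk $20, reward $40, ratio 2.0
# risk $30, reward $30, ratio 1.0
```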

3. Stop Loss Placement

The lower bound suggests where the model’s thesis breaks down:

```python
def calculate_stop_loss(prediction, buffer=0.02):
    """
    Set stop loss based on the prediction's confidence interval.
    If price goes below the lower bound, the thesis is invalid.
    """
    lower_bound = prediction['lower']

    # Add a buffer below the lower bound
    stop_price = lower_bound * (1 - buffer)
    return stop_price
```

If a stock falls below the prediction’s lower bound, the model’s thesis may be wrong—a signal to reassess.

4. Comparing Opportunities

When ranking trade ideas, intervals help prioritize:

| Ticker | Expected Return | Interval Width | Confidence-Adjusted |
|---|---|---|---|
| AAPL | +10% | ±5% | Strong (tight interval) |
| NVDA | +15% | ±20% | Moderate (wide interval) |
| TSLA | +20% | ±40% | Weak (very wide interval) |

TSLA has the highest point estimate but the lowest conviction. AAPL might be the better trade despite lower headline return.
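One simple way to encode this ranking is expected return per unit of interval width. A sketch (the score is an illustrative heuristic, not a FinBrain metric):

```python
def rank_opportunities(ideas):
    """Rank trade ideas by expected return per unit of interval width
    (an illustrative conviction-adjusted score, not an official metric)."""
    for idea in ideas:
        idea['score'] = idea['expected_return'] / idea['interval_width']
    return sorted(ideas, key=lambda i: i['score'], reverse=True)

ideas = [
    {'ticker': 'AAPL', 'expected_return': 0.10, 'interval_width': 0.05},
    {'ticker': 'NVDA', 'expected_return': 0.15, 'interval_width': 0.20},
    {'ticker': 'TSLA', 'expected_return': 0.20, 'interval_width': 0.40},
]
ranked = rank_opportunities(ideas)
print([i['ticker'] for i in ranked])  # ['AAPL', 'NVDA', 'TSLA']
```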

Reading FinBrain’s Prediction Format

FinBrain provides full prediction cones—not just a single estimate, but forecasts for multiple time periods, each with its own confidence interval.

Important: Predictions are forward-looking only. The API provides forecasts from the current date forward (10 days for daily, 12 months for monthly). Historical predictions are not available—you cannot retrieve what the model predicted last month for today. This is by design: predictions are generated fresh each day based on the latest data.

Two Prediction Horizons

| Type | Coverage | Use Case |
|---|---|---|
| Daily predictions | 10 days forward (Day 1 through Day 10) | Short-term trading, entry/exit timing |
| Monthly predictions | 12 months forward (Month 1 through Month 12) | Position building, trend analysis |

Each day or month in the forecast has its own mean, lower bound, and upper bound. This creates a prediction “cone” that typically widens over time as uncertainty compounds.

Response Structure (Daily Example)

```json
{
  "ticker": "AAPL",
  "prediction": {
    "2024-11-04": "201.33,197.21,205.45",
    "2024-11-05": "202.77,196.92,208.61",
    "2024-11-06": "203.99,196.90,211.08",
    "2024-11-07": "204.85,196.45,213.25",
    "2024-11-08": "205.92,195.88,215.96",
    "2024-11-11": "207.15,195.12,219.18",
    "2024-11-12": "208.44,194.25,222.63",
    "2024-11-13": "209.80,193.30,226.30",
    "2024-11-14": "211.22,192.28,230.16",
    "2024-11-15": "212.70,191.20,234.20",
    "expectedShort": "0.22",
    "expectedMid": "0.58",
    "expectedLong": "0.25",
    "type": "daily",
    "lastUpdate": "2024-11-01T23:24:18.371Z"
  }
}
```

Notice how the interval widens over the 10-day horizon:

  • Day 1: $201.33 ± $4.12 (tight)
  • Day 10: $212.70 ± $21.50 (wider)

This reflects the reality that near-term predictions are more reliable than far-out forecasts.
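You can verify the widening directly from the comma-separated strings in the sample response (values copied from the JSON above):

```python
# Half-widths for the first and last dates in the sample response
sample = {
    "2024-11-04": "201.33,197.21,205.45",  # Day 1
    "2024-11-15": "212.70,191.20,234.20",  # Day 10
}
for day, value in sample.items():
    main, lower, upper = map(float, value.split(','))
    half_width = (upper - lower) / 2
    print(f"{day}: ±${half_width:.2f} ({half_width / main:.1%} of price)")
# 2024-11-04: ±$4.12 (2.0% of price)
# 2024-11-15: ±$21.50 (10.1% of price)
```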

Breaking Down the Format

Each date’s prediction contains three comma-separated values:

| Position | Meaning | SDK Column | Example |
|---|---|---|---|
| First | Predicted price (point estimate) | main | 201.33 |
| Second | Lower bound (pessimistic) | lower | 197.21 |
| Third | Upper bound (optimistic) | upper | 205.45 |

So "201.33,197.21,205.45" means:

  • Best estimate: $201.33
  • Downside: $197.21
  • Upside: $205.45
  • Range: $8.24 (~4% of price)

Monthly Predictions

Monthly predictions follow the same format but span 12 months:

```json
{
  "ticker": "AAPL",
  "prediction": {
    "2024-12": "208.50,192.30,224.70",
    "2025-01": "215.20,188.50,241.90",
    "2025-02": "222.10,184.20,260.00",
    ...
    "2025-11": "285.40,158.60,412.20",
    "type": "monthly"
  }
}
```

Monthly intervals are naturally wider than daily intervals—12 months of uncertainty is greater than 10 days.

Movement Probabilities

The expectedShort, expectedMid, and expectedLong fields add another dimension:

| Field | Meaning | Example Interpretation |
|---|---|---|
| expectedShort | Probability of a short-term move | 22% chance of a near-term move |
| expectedMid | Probability of a medium-term move | 58% chance of a medium move |
| expectedLong | Probability of a long-term move | 25% chance of a larger move |

These probabilities help you understand not just where price might go, but the expected magnitude of movement.

Working with Prediction Intervals in Python

The simplest approach uses the SDK’s built-in DataFrame conversion:

```python
from finbrain import FinBrainClient

fb = FinBrainClient(api_key="YOUR_API_KEY")

# Get predictions as DataFrame (daily is default)
df = fb.predictions.ticker("AAPL", as_dataframe=True)
print(df)
#               main   lower   upper
# date
# 2024-11-04  201.33  197.21  205.45
# 2024-11-05  202.77  196.92  208.61
# 2024-11-06  203.99  196.90  211.08
# ...            ...     ...     ...
# 2024-11-15  212.70  191.20  234.20

# For built-in visualization
fb.plot.predictions("AAPL")  # prediction_type="monthly" for monthly
```

The SDK returns columns named main (predicted price), lower (lower bound), and upper (upper bound).

Parsing Raw JSON (Alternative)

If you need the raw response or want custom parsing:

```python
from finbrain import FinBrainClient
import pandas as pd

fb = FinBrainClient(api_key="YOUR_API_KEY")

# Get raw JSON response (daily is default)
data = fb.predictions.ticker("AAPL")

# Parse the comma-separated format: "predicted,low,high"
def parse_prediction(pred_string):
    parts = pred_string.split(',')
    return {
        'main': float(parts[0]),
        'lower': float(parts[1]),
        'upper': float(parts[2])
    }

# Extract predictions for each date
predictions = {}
for date, value in data['prediction'].items():
    if ',' in str(value):  # Skip non-prediction fields like expectedShort
        predictions[date] = parse_prediction(value)

# Convert to DataFrame
df = pd.DataFrame(predictions).T
df.index = pd.to_datetime(df.index)
df = df.sort_index()
print(df)
```

Calculating Interval Metrics

```python
def analyze_prediction_confidence(df, current_price):
    """
    Analyze prediction confidence from interval data.
    """
    latest = df.iloc[-1]  # Furthest prediction

    # Calculate metrics
    expected_return = (latest['main'] - current_price) / current_price
    interval_width = (latest['upper'] - latest['lower']) / latest['main']
    upside = (latest['upper'] - current_price) / current_price
    downside = (current_price - latest['lower']) / current_price

    # Risk/reward ratio
    risk_reward = upside / downside if downside > 0 else float('inf')

    # Confidence score (narrower interval = higher confidence)
    confidence_score = 1 / (1 + interval_width)

    return {
        'expected_return': f"{expected_return:.2%}",
        'interval_width': f"{interval_width:.2%}",
        'upside_potential': f"{upside:.2%}",
        'downside_risk': f"{downside:.2%}",
        'risk_reward_ratio': f"{risk_reward:.2f}",
        'confidence_score': f"{confidence_score:.2f}"
    }

# Example usage (df from the parsing example above; iloc[-1] is the day-10 row)
current_price = 198.50
analysis = analyze_prediction_confidence(df, current_price)
print(analysis)
# {
#     'expected_return': '7.15%',
#     'interval_width': '20.22%',
#     'upside_potential': '17.98%',
#     'downside_risk': '3.68%',
#     'risk_reward_ratio': '4.89',
#     'confidence_score': '0.83'
# }
```

Visualizing the Prediction Cone

The full 10-day or 12-month forecast creates a widening cone—tight near-term, wider long-term:

```python
import matplotlib.pyplot as plt
import pandas as pd
from finbrain import FinBrainClient

fb = FinBrainClient(api_key="YOUR_API_KEY")

def plot_prediction_cone(predictions, current_price, ticker, horizon='daily'):
    """
    Plot the full prediction cone showing all 10 days or 12 months.
    """
    # Parse all predictions into main/lower/upper columns
    data = []
    for date, value in predictions['prediction'].items():
        if ',' in str(value):
            main, lower, upper = map(float, value.split(','))
            data.append({'date': date, 'main': main, 'lower': lower, 'upper': upper})
    df = pd.DataFrame(data)
    df['date'] = pd.to_datetime(df['date'])
    df = df.sort_values('date')

    fig, ax = plt.subplots(figsize=(12, 6))

    # Plot the widening confidence cone
    ax.fill_between(df['date'], df['lower'], df['upper'],
                    alpha=0.3, color='blue', label='Confidence Interval')

    # Plot the mean prediction line
    ax.plot(df['date'], df['main'], 'b-', linewidth=2,
            marker='o', markersize=4, label='Mean Prediction')

    # Current price reference
    ax.axhline(y=current_price, color='gray', linestyle='--',
               label=f'Current Price (${current_price})')

    # Annotate interval width expansion
    first_width = df.iloc[0]['upper'] - df.iloc[0]['lower']
    last_width = df.iloc[-1]['upper'] - df.iloc[-1]['lower']
    ax.annotate(f'±${first_width/2:.1f}', xy=(df.iloc[0]['date'], df.iloc[0]['upper']),
                fontsize=9, color='blue')
    ax.annotate(f'±${last_width/2:.1f}', xy=(df.iloc[-1]['date'], df.iloc[-1]['upper']),
                fontsize=9, color='blue')

    horizon_label = '10-Day' if horizon == 'daily' else '12-Month'
    ax.set_xlabel('Date')
    ax.set_ylabel('Price ($)')
    ax.set_title(f'{ticker} {horizon_label} Prediction Cone')
    ax.legend()
    ax.grid(True, alpha=0.3)
    plt.tight_layout()
    plt.show()

# Plot both horizons
current_price = 198.50
daily_pred = fb.predictions.ticker("AAPL")  # daily is default
plot_prediction_cone(daily_pred, current_price, 'AAPL', horizon='daily')

monthly_pred = fb.predictions.ticker("AAPL", prediction_type="monthly")
plot_prediction_cone(monthly_pred, current_price, 'AAPL', horizon='monthly')

# Or use the SDK's built-in plotting
fb.plot.predictions("AAPL")  # daily
fb.plot.predictions("AAPL", prediction_type="monthly")  # monthly
```

The cone visualization makes the uncertainty structure immediately clear—you can see exactly how confidence degrades over the forecast horizon.

Interval Width Across Time Horizons

Prediction uncertainty compounds over time. This is visible in FinBrain’s data—watch how intervals widen:

Daily Predictions (10-Day Horizon)

| Day | Typical Interval Width | Use Case |
|---|---|---|
| Day 1-2 | ±2-4% | Very short-term timing |
| Day 3-5 | ±4-7% | Swing trade entries |
| Day 6-10 | ±7-12% | Weekly trend direction |

Monthly Predictions (12-Month Horizon)

| Month | Typical Interval Width | Use Case |
|---|---|---|
| Month 1-3 | ±8-15% | Quarter-ahead outlook |
| Month 4-6 | ±15-25% | Half-year positioning |
| Month 7-12 | ±25-40%+ | Long-term trend |

Comparing Daily vs Monthly

```python
from finbrain import FinBrainClient

fb = FinBrainClient(api_key="YOUR_API_KEY")

# Get both prediction types
daily = fb.predictions.ticker("AAPL")  # daily is default
monthly = fb.predictions.ticker("AAPL", prediction_type="monthly")

# Parse and compare interval widths
def get_interval_width(pred_string):
    mean, low, high = map(float, pred_string.split(','))
    return (high - low) / mean

# Daily: 10 forecasts, tighter intervals
daily_widths = [get_interval_width(v) for k, v in daily['prediction'].items()
                if ',' in str(v)]
print(f"Daily interval range: {min(daily_widths):.1%} to {max(daily_widths):.1%}")

# Monthly: 12 forecasts, wider intervals
monthly_widths = [get_interval_width(v) for k, v in monthly['prediction'].items()
                  if ',' in str(v)]
print(f"Monthly interval range: {min(monthly_widths):.1%} to {max(monthly_widths):.1%}")
```

The interval expansion over time isn’t a weakness—it’s honesty. A model claiming the same precision for 12-month forecasts as 1-day forecasts is either lying or overconfident.

Common Mistakes with Confidence Intervals

1. Treating Bounds as Guarantees

The interval is probabilistic, not deterministic:

| Wrong | Right |
|---|---|
| “Stock won’t go below $190” | “There’s ~15% chance it goes below $190” |
| “Guaranteed upside to $210” | “Upside target with ~70% probability” |

Extreme events happen. The interval captures likely scenarios, not all scenarios.

2. Ignoring Interval Width

Focusing only on point estimates misses crucial information:

```python
# Bad: only considering the point estimate
predictions = [
    {'ticker': 'A', 'main': 110, 'lower': 105, 'upper': 115},  # Tight
    {'ticker': 'B', 'main': 110, 'lower': 80, 'upper': 140},   # Wide
]

# Good: considering interval width
for p in predictions:
    width = (p['upper'] - p['lower']) / p['main']
    print(f"{p['ticker']}: {p['main']} ± {width:.1%}")
# A: 110 ± 9.1%
# B: 110 ± 54.5%  <-- much less confident
```

3. Assuming Static Intervals

Intervals change as new information arrives:

  • Earnings announcement → Interval may widen (uncertainty) or narrow (resolution)
  • Volatility spike → Intervals typically widen
  • Extended calm → Intervals may narrow

Check updated predictions regularly.
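Since the API serves only forward-looking forecasts, observing how intervals shift over time means recording your own snapshots. A minimal sketch, assuming the main/lower/upper DataFrame produced by the parsing examples above (the function name, CSV path, and row schema are arbitrary choices, not part of the SDK):

```python
import csv
from datetime import date

def log_prediction_snapshot(df, ticker, path="prediction_log.csv"):
    """Append today's forecast cone to a CSV so interval drift can be
    compared across days (snapshot_date, ticker, target_date, main, lower, upper)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for target_date, row in df.iterrows():
            writer.writerow([date.today().isoformat(), ticker, target_date,
                             row['main'], row['lower'], row['upper']])

# log_prediction_snapshot(df, "AAPL")  # run once per day after fetching predictions
```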

4. Comparing Different Confidence Levels

A 90% interval is wider than a 70% interval for the same prediction. When comparing across sources, ensure you’re comparing equivalent confidence levels.
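If you assume normally distributed forecast errors (an assumption; the article does not specify the model’s error distribution), z-scores let you translate a half-width from one confidence level to another:

```python
from statistics import NormalDist

def rescale_interval(half_width, from_level, to_level):
    """Convert a half-width between confidence levels, assuming
    normally distributed errors (an illustrative assumption)."""
    z = NormalDist()
    z_from = z.inv_cdf(0.5 + from_level / 2)  # e.g. 70% -> z ~ 1.04
    z_to = z.inv_cdf(0.5 + to_level / 2)      # e.g. 90% -> z ~ 1.64
    return half_width * z_to / z_from

# A ±$10 interval at 70% confidence becomes roughly ±$15.9 at 90%
print(round(rescale_interval(10, 0.70, 0.90), 1))  # 15.9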
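If you assume normally distributed forecast errors (an assumption; the article does not specify the model’s error distribution), z-scores let you translate a half-width from one confidence level to another:

```python
from statistics import NormalDist

def rescale_interval(half_width, from_level, to_level):
    """Convert a half-width between confidence levels, assuming
    normally distributed errors (an illustrative assumption)."""
    z = NormalDist()
    z_from = z.inv_cdf(0.5 + from_level / 2)  # e.g. 70% -> z ~ 1.04
    z_to = z.inv_cdf(0.5 + to_level / 2)      # e.g. 90% -> z ~ 1.64
    return half_width * z_to / z_from

# A ±$10 interval at 70% confidence becomes roughly ±$15.9 at 90%
print(round(rescale_interval(10, 0.70, 0.90), 1))  # 15.9
```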

Confidence Intervals vs. Other Uncertainty Measures

Volatility

Historical volatility measures past price swings. Confidence intervals measure forward-looking prediction uncertainty.

| Measure | Backward/Forward | What It Captures |
|---|---|---|
| Historical volatility | Backward | Past price variation |
| Implied volatility | Forward | Market’s expected volatility |
| Prediction interval | Forward | Model’s uncertainty about price level |

Standard Error

Standard error measures uncertainty in the mean estimate. Prediction intervals include both:

  • Uncertainty in the mean (where’s the center?)
  • Inherent variability (how much noise around the center?)

Prediction intervals are wider than standard errors because they account for both sources.
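A numerical sketch of that relationship, under an illustrative normal model (not FinBrain’s actual methodology): the mean’s interval shrinks as the sample grows, but the prediction interval cannot shrink below the noise floor.

```python
import math

def interval_half_widths(sigma, n, z=1.04):
    """Compare the mean's confidence interval with a prediction interval
    (z ~ 1.04 gives roughly 70% coverage under a normal model)."""
    se_mean = sigma / math.sqrt(n)               # uncertainty in the center
    ci_mean = z * se_mean                        # interval for the mean alone
    pi = z * math.sqrt(se_mean**2 + sigma**2)    # adds the noise around the center
    return ci_mean, pi

ci, pi = interval_half_widths(sigma=5.0, n=100)
print(f"CI of mean: ±{ci:.2f}, prediction interval: ±{pi:.2f}")
# CI of mean: ±0.52, prediction interval: ±5.23
```

As n grows, the CI of the mean approaches zero while the prediction interval levels off near ±z·σ, which is why prediction intervals are always the wider of the two.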

Integrating Intervals into Your Process

For Discretionary Traders

  1. Screen for tight intervals — Focus on high-conviction predictions
  2. Check asymmetry — Is upside/downside ratio favorable?
  3. Size by confidence — Smaller positions for wider intervals
  4. Set stops at lower bound — Define where thesis breaks
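The four steps above can be folded into a single pre-trade check. A sketch with arbitrary thresholds (the function name and cutoff values are hypothetical, not part of any SDK):

```python
def evaluate_trade(current, main, lower, upper, max_width=0.15, min_rr=2.0):
    """Pre-trade checklist sketch: tight interval, favorable asymmetry,
    stop under the lower bound. Thresholds are arbitrary."""
    width = (upper - lower) / main
    if width > max_width:
        return None  # interval too wide: low conviction, skip
    risk = current - lower
    reward = upper - current
    if risk <= 0 or reward / risk < min_rr:
        return None  # asymmetry not favorable enough
    return {
        'size_multiplier': 1 / (1 + width),  # smaller size for wider intervals
        'stop_loss': lower * 0.98,           # 2% buffer under the lower bound
    }

plan = evaluate_trade(current=198.50, main=205.0, lower=196.0, upper=214.0)
print(plan)
```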

For Systematic Traders

```python
def generate_signals(predictions, current_prices, min_confidence=0.8):
    """
    Generate trading signals based on prediction intervals.
    """
    signals = []
    for ticker, pred in predictions.items():
        current = current_prices[ticker]

        # Calculate confidence score
        interval_width = (pred['upper'] - pred['lower']) / pred['main']
        confidence = 1 / (1 + interval_width)

        # Skip low-confidence predictions
        if confidence < min_confidence:
            continue

        # Calculate expected return
        expected_return = (pred['main'] - current) / current

        # Check if the lower bound is above current price (high-conviction long)
        if pred['lower'] > current * 1.02:
            signals.append({
                'ticker': ticker,
                'signal': 'STRONG_BUY',
                'confidence': confidence,
                'expected_return': expected_return
            })
        elif pred['main'] > current * 1.05:
            signals.append({
                'ticker': ticker,
                'signal': 'BUY',
                'confidence': confidence,
                'expected_return': expected_return
            })
    return sorted(signals, key=lambda x: x['confidence'], reverse=True)
```

Key Takeaways

  1. Point estimates without intervals hide crucial information about model confidence
  2. Interval width tells you how confident the model is—narrow = high confidence, wide = low confidence
  3. Use intervals for position sizing: wider intervals warrant smaller positions
  4. Risk/reward calculations require intervals, not just point estimates
  5. Prediction uncertainty compounds over time—near-term forecasts are more precise
  6. The lower bound suggests where your thesis breaks down—useful for stop losses
  7. Always compare opportunities using confidence-adjusted metrics, not just expected returns

A prediction without a confidence interval is like a weather forecast without a probability of rain. The number might be right, but you have no idea how much to trust it. Demand uncertainty quantification from any forecast you use.