User Scoring Engine

The Predictu scoring engine computes a composite score for every user who has placed at least one resolved trade. The score drives tier progression, risk engine decisions, and operator-facing analytics. Scores range from 0 to 100 and are recalculated daily in a batch cron job.

When does scoring run? A daily cron job fires at 03:00 UTC and re-scores every user who had at least one trade resolve in the previous 24 hours, plus a full re-score of all active users every Sunday at 04:00 UTC.

Overview

Each user receives a single numerical score that captures how skilled, disciplined, and profitable they are relative to the house. The scoring engine exists so operators can automatically identify sharp bettors, protect margins, and reward recreational players with better limits. Scores are not exposed to end users; they are visible only in the God Mode and Operator dashboards.

The composite score is a weighted sum of five independent metrics. Each metric is first normalized to a 0–100 scale, then multiplied by its weight. The final score is clamped to [0, 100].

composite = clamp(
  win_rate_score   * 0.30
+ edge_score       * 0.25
+ timing_score     * 0.15
+ sizing_score     * 0.15
+ diversity_score  * 0.15
, 0, 100)
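
The weighting and clamp above can be sketched as a small JavaScript helper (a sketch with illustrative names, not the production implementation):

```javascript
// Weights from the composite formula; they sum to 1.0.
const WEIGHTS = {
  win_rate_score: 0.30,
  edge_score: 0.25,
  timing_score: 0.15,
  sizing_score: 0.15,
  diversity_score: 0.15,
};

const clamp = (x, lo, hi) => Math.min(hi, Math.max(lo, x));

// metrics: object holding the five metric scores, each already normalized to 0-100
function compositeScore(metrics) {
  const weighted = Object.entries(WEIGHTS)
    .reduce((sum, [name, w]) => sum + metrics[name] * w, 0);
  return clamp(weighted, 0, 100);
}
```

With the neutral metric defaults (win rate 50, edge 50, timing 0, sizing 50, diversity 10), this yields 36.5.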

The Five Metrics

1. Win Rate (30% weight)

Win rate is the most heavily weighted metric because a consistently winning bettor is the clearest signal of sharp activity. It measures the ratio of resolved trades that settled in the user’s favor to total resolved trades.

win_rate = resolved_wins / resolved_total
win_rate_score = win_rate * 100   // 50% win rate = 50 pts, 80% = 80 pts

| Win Rate | Raw Score | Weighted Contribution |
|----------|-----------|-----------------------|
| 30% | 30 | 9.0 |
| 50% | 50 | 15.0 |
| 65% | 65 | 19.5 |
| 80% | 80 | 24.0 |
| 95% | 95 | 28.5 |

Only resolved trades count. Open positions are excluded. A minimum of 5 resolved trades is required before the win rate metric becomes active; until then it defaults to 50 (neutral).
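
A minimal sketch of this metric, including the 5-trade activation threshold (the function and constant names are illustrative):

```javascript
const WIN_RATE_MIN_TRADES = 5; // metric is inactive below this count

function winRateScore(resolvedWins, resolvedTotal) {
  if (resolvedTotal < WIN_RATE_MIN_TRADES) return 50; // neutral default
  return (resolvedWins / resolvedTotal) * 100;        // 0.65 win rate -> 65 pts
}
```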

2. Edge Captured (25% weight)

Edge captured measures how much theoretical value a user extracts relative to fair market prices. It answers the question: “Is this user consistently finding mispriced markets?”

// For each resolved trade (payout is 0 when the trade loses):
edge_per_trade = payout - cost

// Aggregate:
total_edge = sum(edge_per_trade for all resolved trades)
total_risk = sum(cost for all resolved trades)
edge_pct   = total_edge / total_risk

// Normalize to 0-100:
// -50% edge = 0 pts, 0% edge = 50 pts, +50% edge = 100 pts
edge_score = clamp((edge_pct + 0.5) * 100, 0, 100)

| Edge % | Raw Score | Weighted Contribution |
|--------|-----------|-----------------------|
| -30% | 20 | 5.0 |
| 0% | 50 | 12.5 |
| +20% | 70 | 17.5 |
| +40% | 90 | 22.5 |
| +50% or more | 100 | 25.0 |

The edge metric is particularly important for detecting users who might be using external models or information advantages. A user with a modest win rate but high edge is typically buying large positions on underpriced outcomes.
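
The pseudocode above can be made runnable as follows (a sketch; representing trades as `{ payout, cost }` objects is an assumption about the data shape):

```javascript
// trades: resolved trades as { payout, cost }; payout is 0 for losses.
function edgeScore(trades) {
  const totalEdge = trades.reduce((s, t) => s + (t.payout - t.cost), 0);
  const totalRisk = trades.reduce((s, t) => s + t.cost, 0);
  if (totalRisk === 0) return 50; // nothing risked yet: neutral (assumption)
  const edgePct = totalEdge / totalRisk;
  // -50% edge -> 0 pts, 0% edge -> 50 pts, +50% or more -> 100 pts
  return Math.min(100, Math.max(0, (edgePct + 0.5) * 100));
}
```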

3. Timing (15% weight)

Timing rewards users who enter positions early, when prices are still far from implied fair value. The intuition is that sharp bettors buy YES while it is cheap (before the market moves up) or buy NO while it is cheap (before the market corrects). In the check below, entry_price is the market's YES price at entry:

  • Good YES timing: bought YES at a price below 60 cents
  • Good NO timing: bought NO while the YES price was above 40 cents (equivalently, the NO side cost less than 60 cents)

// For each resolved winning trade (entry_price = YES price at entry):
is_well_timed = (side === 'YES' && entry_price < 0.60)
             || (side === 'NO'  && entry_price > 0.40)

well_timed_count = count(is_well_timed for wins)
timing_ratio     = well_timed_count / resolved_wins

// Scale:
timing_score = timing_ratio * 100

| Timing Ratio | Raw Score | Weighted Contribution |
|--------------|-----------|-----------------------|
| 0% (never well-timed) | 0 | 0.0 |
| 25% | 25 | 3.75 |
| 50% | 50 | 7.5 |
| 75% | 75 | 11.25 |
| 100% (always well-timed) | 100 | 15.0 |

Edge case: If a user has zero resolved wins, the timing score defaults to 0. This is intentional: a losing user with good timing is still a losing user.
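
Mirroring the pseudocode above, including the zero-wins default (a sketch with illustrative names):

```javascript
// wins: resolved winning trades as { side, entry_price }.
function timingScore(wins) {
  if (wins.length === 0) return 0; // losing users get no timing credit
  const wellTimed = wins.filter(
    (t) => (t.side === 'YES' && t.entry_price < 0.60) ||
           (t.side === 'NO'  && t.entry_price > 0.40)
  ).length;
  return (wellTimed / wins.length) * 100;
}
```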

4. Sizing Discipline (15% weight)

Sizing discipline compares the average trade size on winning trades to the average trade size on losing trades. A sharp bettor sizes up when they have conviction (winners) and keeps losers small. This is a hallmark of professional trading.

avg_win_size  = mean(trade_amount for resolved_wins)
avg_loss_size = mean(trade_amount for resolved_losses)

// Ratio: >1 means bigger winners than losers (sharp behavior)
if (avg_loss_size === 0) {
  sizing_ratio = 3.0  // all wins, max out
} else {
  sizing_ratio = avg_win_size / avg_loss_size
}

// Normalize: ratio of 0.5 = 25 pts, 1.0 = 50 pts, 2.0 = 100 pts
sizing_score = clamp(sizing_ratio * 50, 0, 100)

| Win/Loss Size Ratio | Interpretation | Raw Score | Weighted |
|---------------------|----------------|-----------|----------|
| 0.3x | Winners are much smaller than losers | 15 | 2.25 |
| 0.5x | Winners are half the size of losers | 25 | 3.75 |
| 1.0x | Equal sizing | 50 | 7.5 |
| 1.5x | Winners 50% larger | 75 | 11.25 |
| 2.0x+ | Winners double losers (professional) | 100 | 15.0 |

This metric has a minimum trade count requirement of 3 resolved trades before it activates. Below that threshold, sizing defaults to 50 (neutral).
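
A runnable sketch of the sizing metric, including the all-wins special case and the 3-trade minimum; the all-losses guard (score 0) is an assumption the pseudocode leaves unstated:

```javascript
const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

function sizingScore(winSizes, lossSizes) {
  if (winSizes.length + lossSizes.length < 3) return 50; // neutral below threshold
  if (lossSizes.length === 0) return 100;                // all wins: ratio maxes out
  if (winSizes.length === 0) return 0;                   // all losses (assumption)
  const ratio = mean(winSizes) / mean(lossSizes);
  return Math.min(100, Math.max(0, ratio * 50)); // 1.0x -> 50 pts, 2.0x+ -> 100 pts
}
```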

5. Diversification (15% weight)

Diversification measures how many distinct markets a user has traded across. A user who only bets on one market is likely recreational; a user spread across many markets is either an algorithm or a sophisticated bettor.

unique_markets = count(distinct market_id for all trades)

// Scoring curve (lookup table with interpolation):
// 1 market   = 10 pts
// 2 markets  = 25 pts
// 3 markets  = 45 pts
// 4 markets  = 65 pts
// 5 markets  = 80 pts
// 8 markets  = 90 pts
// 12+ markets = 100 pts

diversity_score = interpolate(unique_markets, DIVERSITY_CURVE)
| Unique Markets | Raw Score | Weighted Contribution |
|----------------|-----------|-----------------------|
| 1 | 10 | 1.5 |
| 2 | 25 | 3.75 |
| 3 | 45 | 6.75 |
| 4 | 65 | 9.75 |
| 5 | 80 | 12.0 |
| 8 | 90 | 13.5 |
| 12+ | 100 | 15.0 |

The curve is deliberately steep early on: going from 1 to 3 markets more than quadruples the score. This reflects the reality that most recreational users stick to 1–2 markets while professional bettors systematically scan for edges across the full catalog.
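
Assuming straight-line interpolation between the documented points, the curve can be sketched as:

```javascript
// Knot points from the lookup table: [unique_markets, score]
const DIVERSITY_CURVE = [
  [1, 10], [2, 25], [3, 45], [4, 65], [5, 80], [8, 90], [12, 100],
];

function diversityScore(uniqueMarkets) {
  const first = DIVERSITY_CURVE[0];
  const last = DIVERSITY_CURVE[DIVERSITY_CURVE.length - 1];
  if (uniqueMarkets <= first[0]) return first[1];
  if (uniqueMarkets >= last[0]) return last[1]; // 12+ markets cap at 100
  for (let i = 1; i < DIVERSITY_CURVE.length; i++) {
    const [x1, y1] = DIVERSITY_CURVE[i];
    if (uniqueMarkets <= x1) {
      const [x0, y0] = DIVERSITY_CURVE[i - 1];
      // linear interpolation between adjacent knots (assumption)
      return y0 + ((uniqueMarkets - x0) / (x1 - x0)) * (y1 - y0);
    }
  }
}
```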

Composite Score Example

Let’s walk through a concrete example for a user named “Alice” who has 30 resolved trades:

| Metric | Raw Value | Raw Score | Weight | Contribution |
|--------|-----------|-----------|--------|--------------|
| Win Rate | 70% | 70 | 30% | 21.0 |
| Edge Captured | +18% | 68 | 25% | 17.0 |
| Timing | 60% well-timed | 60 | 15% | 9.0 |
| Sizing | 1.4x ratio | 70 | 15% | 10.5 |
| Diversification | 6 markets | 84 | 15% | 12.6 |

composite = 21.0 + 17.0 + 9.0 + 10.5 + 12.6
         = 70.1

classification = "sharp"  // falls in the 70-84 range

Alice scores 70.1, placing her in the sharp classification. She is not yet auto-restricted because she hasn’t crossed the 90-point threshold, but she’ll appear in the risk dashboard with a yellow flag.

Classification Thresholds

Once the composite score is computed, users are bucketed into one of four classifications. These classifications drive UI badges in the admin dashboards and feed into automated risk rules.

| Classification | Score Range | Description | Risk Action |
|----------------|-------------|-------------|-------------|
| Recreational | 0 – 39 | Casual bettor, house-favorable. Low win rate, poor timing, small edge. | None. Full limits apply per tier. |
| Moderate | 40 – 69 | Above average but not consistently profitable. May have streaks. | Monitored. Appears in weekly risk summary. |
| Sharp | 70 – 84 | Consistently profitable. Good timing and sizing. Possible model user. | Flagged. Yellow badge in admin. Included in daily risk email. |
| Professional | 85 – 100 | Extremely skilled. Likely using quantitative models or inside info. | Red badge. Immediate review recommended. Auto-restriction possible. |

Transitions: Classification changes are logged in the audit trail. When a user crosses from “moderate” to “sharp” or from “sharp” to “professional”, a risk event is created and the operator is notified via callback (if configured).
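
The bucketing itself reduces to a few threshold checks (a sketch of the table above):

```javascript
// Map a composite score (0-100) to its classification bucket.
function classify(score) {
  if (score >= 85) return 'professional';
  if (score >= 70) return 'sharp';
  if (score >= 40) return 'moderate';
  return 'recreational';
}
```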

Auto-Restriction

The scoring engine can automatically restrict users who meet both of the following conditions:

  1. Composite score ≥ 90
  2. At least 20 resolved trades (to prevent false positives from small samples)

When auto-restriction triggers, the following happens:

  • The user’s tier is changed to restricted
  • Their per-trade limit drops to $5
  • A +3% spread is added to all their quotes
  • A risk event of type AUTO_RESTRICT is created
  • The operator receives an S2S callback of type USER_RESTRICTED
  • An entry is added to the audit log with the score breakdown
// Auto-restriction logic (simplified)
if (user.composite_score >= 90 && user.resolved_trade_count >= 20) {
  await setUserTier(user.id, 'restricted');
  await createRiskEvent({
    type: 'AUTO_RESTRICT',
    user_id: user.id,
    details: {
      score: user.composite_score,
      classification: 'professional',
      resolved_trades: user.resolved_trade_count,
      metrics: user.score_breakdown
    }
  });
  await notifyOperator(user.operator_id, 'USER_RESTRICTED', { user_id: user.id });
}
Irreversibility: Auto-restriction is not automatically reversed if the score drops below 90. A God Mode admin must manually change the tier back. This is by design: once a user is identified as professional, lowering their score through losses doesn’t mean they’ve become less skilled.

Batch Scoring Cron Job

Scoring runs as a server-side cron job, not in real-time. This keeps trade execution fast and avoids latency spikes during high-volume periods.

Daily Incremental Scoring

Runs at 03:00 UTC every day. It selects all users who had at least one trade resolve in the past 24 hours, recomputes their scores, and updates the user_scores table.

// Cron: 0 3 * * *  (daily at 03:00 UTC)
// Job: score_users_daily

1. SELECT DISTINCT user_id FROM trades
   WHERE resolved_at > NOW() - INTERVAL '24 hours'

2. For each user_id:
   a. Fetch all resolved trades
   b. Compute 5 metrics
   c. Compute composite score
   d. Determine classification
   e. UPSERT into user_scores table
   f. If classification changed → create risk_event
   g. If auto-restriction triggered → update tier + notify
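
The daily pass could be sketched as follows; every helper function here is hypothetical, since the doc specifies only the steps:

```javascript
// Hypothetical sketch of the daily scoring pass; helpers are injected stand-ins.
function scoreUsersDaily({ recentUserIds, fetchResolvedTrades, computeScore,
                           upsertScore, onClassificationChange }) {
  for (const userId of recentUserIds) {           // step 1: trades resolved in last 24h
    const trades = fetchResolvedTrades(userId);   // step 2a: all resolved trades
    const result = computeScore(trades);          // steps 2b-2d: metrics, composite, class
    const previous = upsertScore(userId, result); // step 2e: UPSERT into user_scores
    if (previous && previous.classification !== result.classification) {
      onClassificationChange(userId, result);     // step 2f: create risk_event / notify
    }
  }
}
```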

Weekly Full Re-Score

Runs every Sunday at 04:00 UTC. This is a complete re-score of every user with at least one resolved trade, regardless of recent activity. It catches edge cases where historical trade data was corrected or markets were re-settled.

// Cron: 0 4 * * 0  (Sundays at 04:00 UTC)
// Job: score_users_weekly_full

1. SELECT DISTINCT user_id FROM trades
   WHERE resolved_at IS NOT NULL

2. Full re-score pipeline (same as daily, but all users)
3. Generate weekly scoring summary report
4. Email summary to God Mode admins

Data Model

Scores are stored in the user_scores table, which is updated by the cron jobs and read by the admin dashboards and risk engine.

CREATE TABLE user_scores (
  user_id         UUID PRIMARY KEY REFERENCES users(id),
  composite_score NUMERIC(5,2) NOT NULL DEFAULT 0,
  classification  TEXT NOT NULL DEFAULT 'recreational',
  win_rate_score  NUMERIC(5,2) NOT NULL DEFAULT 50,
  edge_score      NUMERIC(5,2) NOT NULL DEFAULT 50,
  timing_score    NUMERIC(5,2) NOT NULL DEFAULT 0,
  sizing_score    NUMERIC(5,2) NOT NULL DEFAULT 50,
  diversity_score NUMERIC(5,2) NOT NULL DEFAULT 10,
  resolved_trades INTEGER NOT NULL DEFAULT 0,
  scored_at       TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

| Column | Type | Description |
|--------|------|-------------|
| composite_score | NUMERIC(5,2) | Final weighted score, 0–100 |
| classification | TEXT | One of: recreational, moderate, sharp, professional |
| win_rate_score | NUMERIC(5,2) | Raw win rate metric, 0–100 |
| edge_score | NUMERIC(5,2) | Raw edge captured metric, 0–100 |
| timing_score | NUMERIC(5,2) | Raw timing metric, 0–100 |
| sizing_score | NUMERIC(5,2) | Raw sizing discipline metric, 0–100 |
| diversity_score | NUMERIC(5,2) | Raw diversification metric, 0–100 |
| resolved_trades | INTEGER | Total resolved trades at time of scoring |
| scored_at | TIMESTAMPTZ | Last time this user was scored |

Score History & Trends

Every time a user’s score changes by more than 5 points, a snapshot is written to the user_score_history table. This allows God Mode admins to view score trends over time and identify users who are rapidly improving (potential model deployment).

CREATE TABLE user_score_history (
  id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id         UUID NOT NULL REFERENCES users(id),
  composite_score NUMERIC(5,2) NOT NULL,
  classification  TEXT NOT NULL,
  delta           NUMERIC(5,2) NOT NULL, -- change from previous
  snapshot        JSONB NOT NULL,         -- full metric breakdown
  created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

The God Mode dashboard renders a sparkline chart from this data, showing the user’s score trajectory over the last 90 days. A rapidly rising score (e.g., +20 points in a week) triggers a SCORE_SURGE risk event.
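
A hypothetical sketch of the two history-related checks; the +20-points-in-a-week surge threshold is taken from the doc’s example, not a confirmed constant:

```javascript
// Snapshot when the score moves by more than 5 points in either direction.
function shouldSnapshot(previousScore, newScore) {
  return Math.abs(newScore - previousScore) > 5;
}

// Example surge rule: a rise of more than 20 points over a week (illustrative).
function isScoreSurge(scoreWeekAgo, newScore) {
  return newScore - scoreWeekAgo > 20;
}
```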

Operator & Admin Overrides

God Mode admins can override any aspect of the scoring system:

  • Manual classification: Force a user into any classification regardless of score
  • Score freeze: Lock a user’s score so cron jobs skip them
  • Tier override: Set any tier independently of the scoring engine
  • Auto-restriction toggle: Disable auto-restriction for a specific user (e.g., a known VIP whale the operator wants to keep)

All overrides are logged in the audit trail with the admin’s user ID and a required reason field.

Interaction with Risk Engine

The scoring engine feeds directly into several risk engine subsystems:

| Risk Feature | How Scoring Integrates |
|--------------|------------------------|
| Per-trade limits | Tier (influenced by score) sets the max trade amount |
| Spread adjustments | Restricted tier adds +3% spread; custom spreads can be set per classification |
| Circuit breakers | When a sharp/professional user places a large trade, the circuit breaker threshold is lowered |
| Exposure caps | Professional users have their individual exposure cap reduced to 50% of normal |
| Risk events | Classification changes, score surges, and auto-restrictions all generate risk events |
| Operator callbacks | S2S callbacks fire on classification change and auto-restriction |

Frequently Asked Questions

How many trades before scoring is meaningful?

Technically, scoring activates after 1 resolved trade, but individual metrics have their own minimums (win rate needs 5, sizing needs 3). Scores become statistically reliable around 15–20 resolved trades. The auto-restriction threshold of 20 trades was chosen for this reason.

Can users game the score?

It’s difficult. Deliberately losing trades to lower a score also costs real money. The diversification metric prevents concentrating losses in cheap markets. The sizing metric catches users who try to lose small amounts while winning big. That said, a sufficiently motivated user could slowly degrade their score, but the manual admin override system catches these cases.

What score do new users get?

New users start with a default composite score of 36.5, which places them in the recreational band. This follows directly from the neutral metric defaults (win rate 50, edge 50, timing 0, sizing 50, diversity 10): 50*0.3 + 50*0.25 + 0*0.15 + 50*0.15 + 10*0.15 = 15 + 12.5 + 0 + 7.5 + 1.5 = 36.5. The score is recalculated fresh as soon as the user’s first trade resolves.

Are scores shared across operators?

No. Scores are computed per-operator. A user who plays on Casino A and Casino B will have two independent scores. God Mode admins can see a cross-operator aggregate, but it doesn’t affect individual operator scoring.