User Scoring Engine
The Predictu scoring engine computes a composite score for every user who has placed at least one resolved trade. The score drives tier progression, risk engine decisions, and operator-facing analytics. Scores range from 0 to 100 and are recalculated daily in a batch cron job.
The daily job runs at 03:00 UTC and re-scores every user who had at least one trade resolve in the previous 24 hours; a full re-score of all active users runs every Sunday at 04:00 UTC.
Overview
Each user receives a single numerical score that captures how skilled, disciplined, and profitable they are relative to the house. The scoring engine exists so operators can automatically identify sharp bettors, protect margins, and reward recreational players with better limits. Scores are not exposed to end users; they are visible only in the God Mode and Operator dashboards.
The composite score is a weighted sum of five independent metrics. Each metric is first normalized to a 0–100 scale, then multiplied by its weight. The final score is clamped to [0, 100].
composite = clamp(
win_rate_score * 0.30
+ edge_score * 0.25
+ timing_score * 0.15
+ sizing_score * 0.15
+ diversity_score * 0.15
, 0, 100)
The Five Metrics
1. Win Rate (30% weight)
Win rate is the most heavily weighted metric because a consistently winning bettor is the clearest signal of sharp activity. It measures the ratio of resolved trades that settled in the user’s favor to total resolved trades.
win_rate = resolved_wins / resolved_total
win_rate_score = win_rate * 100 // 50% win rate = 50 pts, 80% = 80 pts
| Win Rate | Raw Score | Weighted Contribution |
|---|---|---|
| 30% | 30 | 9.0 |
| 50% | 50 | 15.0 |
| 65% | 65 | 19.5 |
| 80% | 80 | 24.0 |
| 95% | 95 | 28.5 |
Only resolved trades count. Open positions are excluded. A minimum of 5 resolved trades is required before the win rate metric becomes active; until then it defaults to 50 (neutral).
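The rules above can be sketched as a small helper. This is a minimal illustration, not the production implementation; the function and parameter names are assumptions.

```typescript
// Win-rate metric with the minimum-sample fallback described above.
// Names are illustrative, not taken from the production codebase.
function winRateScore(resolvedWins: number, resolvedTotal: number): number {
  if (resolvedTotal < 5) {
    return 50; // fewer than 5 resolved trades: metric stays neutral
  }
  return (resolvedWins / resolvedTotal) * 100;
}
```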
2. Edge Captured (25% weight)
Edge captured measures how much theoretical value a user extracts relative to fair market prices. It answers the question: “Is this user consistently finding mispriced markets?”
// For each resolved trade (payout is 0 on a loss, so edge is negative):
edge_per_trade = payout - cost
// Aggregate:
total_edge = sum(edge_per_trade for all resolved trades)
total_risk = sum(cost for all resolved trades)
edge_pct = total_edge / total_risk
// Normalize to 0-100:
// -50% edge = 0 pts, 0% edge = 50 pts, +50% edge = 100 pts
edge_score = clamp((edge_pct + 0.5) * 100, 0, 100)
| Edge % | Raw Score | Weighted Contribution |
|---|---|---|
| -30% | 20 | 5.0 |
| 0% | 50 | 12.5 |
| +20% | 70 | 17.5 |
| +40% | 90 | 22.5 |
| +50% or more | 100 | 25.0 |
The edge metric is particularly important for detecting users who might be using external models or information advantages. A user with a modest win rate but high edge is typically buying large positions on underpriced outcomes.
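A runnable sketch of the aggregation above, assuming payout is recorded as 0 on losing trades and a neutral fallback when there is no resolved risk yet. The type and function names are illustrative assumptions.

```typescript
interface ResolvedTrade {
  payout: number; // 0 on a losing trade
  cost: number;   // amount risked
}

// Edge captured, normalized so -50% edge maps to 0 and +50% maps to 100.
function edgeScore(trades: ResolvedTrade[]): number {
  const totalEdge = trades.reduce((sum, t) => sum + (t.payout - t.cost), 0);
  const totalRisk = trades.reduce((sum, t) => sum + t.cost, 0);
  if (totalRisk === 0) return 50; // no resolved risk yet: neutral (assumption)
  const edgePct = totalEdge / totalRisk;
  return Math.min(100, Math.max(0, (edgePct + 0.5) * 100));
}
```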
3. Timing (15% weight)
Timing rewards users who enter positions early when prices are far from implied fair value. The intuition is that sharp bettors buy YES at low prices (before the market moves up) or buy NO when YES is expensive (before the market corrects). Specifically:
- Good YES timing: Bought YES at a price below 60 cents
- Good NO timing: Bought NO at a price below 40 cents (equivalently, YES was above 60 cents)
// For each resolved winning trade:
is_well_timed = (side === 'YES' && entry_price < 0.60)
|| (side === 'NO' && entry_price < 0.40)
well_timed_count = count(is_well_timed for wins)
timing_ratio = well_timed_count / resolved_wins
// Scale:
timing_score = timing_ratio * 100
| Timing Ratio | Raw Score | Weighted Contribution |
|---|---|---|
| 0% (never well-timed) | 0 | 0.0 |
| 25% | 25 | 3.75 |
| 50% | 50 | 7.5 |
| 75% | 75 | 11.25 |
| 100% (always well-timed) | 100 | 15.0 |
If a user has no resolved wins, the timing score is 0. This is intentional: a losing user with good timing is still a losing user.
4. Sizing Discipline (15% weight)
Sizing discipline compares the average trade size on winning trades to the average trade size on losing trades. A sharp bettor sizes up when they have conviction (winners) and keeps losers small. This is a hallmark of professional trading.
avg_win_size = mean(trade_amount for resolved_wins)
avg_loss_size = mean(trade_amount for resolved_losses)
// Ratio: >1 means bigger winners than losers (sharp behavior)
if (avg_loss_size === 0) {
sizing_ratio = 3.0 // all wins, max out
} else {
sizing_ratio = avg_win_size / avg_loss_size
}
// Normalize: ratio of 0.5 = 25 pts, 1.0 = 50 pts, 2.0 = 100 pts
sizing_score = clamp(sizing_ratio * 50, 0, 100)
| Win/Loss Size Ratio | Interpretation | Raw Score | Weighted |
|---|---|---|---|
| 0.3x | Winners are much smaller than losers | 15 | 2.25 |
| 0.5x | Winners are half the size of losers | 25 | 3.75 |
| 1.0x | Equal sizing | 50 | 7.5 |
| 1.5x | Winners 50% larger | 75 | 11.25 |
| 2.0x+ | Winners double losers (professional) | 100 | 15.0 |
This metric requires a minimum of 3 resolved trades before it activates. Below that threshold, sizing defaults to 50 (neutral).
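The sizing rules, including the all-wins special case and the 3-trade minimum, can be sketched as follows. Names are illustrative, not the production implementation.

```typescript
// Sizing discipline: average winner size vs. average loser size,
// scaled so a 1.0x ratio scores 50 and a 2.0x+ ratio scores 100.
function sizingScore(winSizes: number[], lossSizes: number[]): number {
  if (winSizes.length + lossSizes.length < 3) {
    return 50; // below the 3-trade minimum: neutral
  }
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const avgWin = winSizes.length > 0 ? mean(winSizes) : 0;
  const avgLoss = lossSizes.length > 0 ? mean(lossSizes) : 0;
  const ratio = avgLoss === 0 ? 3.0 : avgWin / avgLoss; // all wins: max out
  return Math.min(100, Math.max(0, ratio * 50));
}
```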
5. Diversification (15% weight)
Diversification measures how many distinct markets a user has traded across. A user who only bets on one market is likely recreational; a user spread across many markets is either an algorithm or a sophisticated bettor.
unique_markets = count(distinct market_id for all trades)
// Scoring curve (lookup table with interpolation):
// 1 market = 10 pts
// 2 markets = 25 pts
// 3 markets = 45 pts
// 4 markets = 65 pts
// 5 markets = 80 pts
// 8 markets = 90 pts
// 12+ markets = 100 pts
diversity_score = interpolate(unique_markets, DIVERSITY_CURVE)
| Unique Markets | Raw Score | Weighted Contribution |
|---|---|---|
| 1 | 10 | 1.5 |
| 2 | 25 | 3.75 |
| 3 | 45 | 6.75 |
| 4 | 65 | 9.75 |
| 5 | 80 | 12.0 |
| 8 | 90 | 13.5 |
| 12+ | 100 | 15.0 |
The curve is deliberately steep early on: going from 1 to 3 markets more than quadruples the score. This reflects the reality that most recreational users stick to 1–2 markets while professional bettors systematically scan for edges across the full catalog.
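One way to read the lookup table, assuming straight linear interpolation between the listed points. The curve encoding and function names here are assumptions; the production curve may shape segments differently (the Alice example below scores 6 markets as 84, slightly above the linear 83.3).

```typescript
// Hypothetical encoding of DIVERSITY_CURVE as (markets, score) points,
// with linear interpolation between them.
const DIVERSITY_CURVE: Array<[number, number]> = [
  [1, 10], [2, 25], [3, 45], [4, 65], [5, 80], [8, 90], [12, 100],
];

function diversityScore(uniqueMarkets: number): number {
  if (uniqueMarkets <= DIVERSITY_CURVE[0][0]) return DIVERSITY_CURVE[0][1];
  for (let i = 1; i < DIVERSITY_CURVE.length; i++) {
    const [x1, y1] = DIVERSITY_CURVE[i];
    if (uniqueMarkets <= x1) {
      const [x0, y0] = DIVERSITY_CURVE[i - 1];
      // Linear interpolation within the segment [x0, x1]
      return y0 + ((uniqueMarkets - x0) / (x1 - x0)) * (y1 - y0);
    }
  }
  return 100; // 12+ markets
}
```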
Composite Score Example
Let’s walk through a concrete example for a user named “Alice” who has 30 resolved trades:
| Metric | Raw Value | Raw Score | Weight | Contribution |
|---|---|---|---|---|
| Win Rate | 70% | 70 | 30% | 21.0 |
| Edge Captured | +18% | 68 | 25% | 17.0 |
| Timing | 60% well-timed | 60 | 15% | 9.0 |
| Sizing | 1.4x ratio | 70 | 15% | 10.5 |
| Diversification | 6 markets | 84 | 15% | 12.6 |
composite = 21.0 + 17.0 + 9.0 + 10.5 + 12.6
= 70.1
classification = "sharp" // falls in the 70-84 range
Alice scores 70.1, placing her in the sharp classification. She is not yet auto-restricted because she hasn’t crossed the 90-point threshold, but she’ll appear in the risk dashboard with a yellow flag.
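The arithmetic in the example can be checked directly from the weights. A minimal sketch, using Alice's raw metric scores from the table above:

```typescript
// Recomputing Alice's composite from the raw metric scores and weights.
const WEIGHTS = { winRate: 0.30, edge: 0.25, timing: 0.15, sizing: 0.15, diversity: 0.15 };
const alice = { winRate: 70, edge: 68, timing: 60, sizing: 70, diversity: 84 };

const composite = Math.min(100, Math.max(0,
  alice.winRate * WEIGHTS.winRate
  + alice.edge * WEIGHTS.edge
  + alice.timing * WEIGHTS.timing
  + alice.sizing * WEIGHTS.sizing
  + alice.diversity * WEIGHTS.diversity
));
// composite ≈ 70.1
```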
Classification Thresholds
Once the composite score is computed, users are bucketed into one of four classifications. These classifications drive UI badges in the admin dashboards and feed into automated risk rules.
| Classification | Score Range | Description | Risk Action |
|---|---|---|---|
| Recreational | 0 – 39 | Casual bettor, house-favorable. Low win rate, poor timing, small edge. | None. Full limits apply per tier. |
| Moderate | 40 – 69 | Above average but not consistently profitable. May have streaks. | Monitored. Appears in weekly risk summary. |
| Sharp | 70 – 84 | Consistently profitable. Good timing and sizing. Possible model user. | Flagged. Yellow badge in admin. Included in daily risk email. |
| Professional | 85 – 100 | Extremely skilled. Likely using quantitative models or inside info. | Red badge. Immediate review recommended. Auto-restriction possible. |
Auto-Restriction
The scoring engine can automatically restrict users who meet both of the following conditions:
- Composite score ≥ 90
- At least 20 resolved trades (to prevent false positives from small samples)
When auto-restriction triggers, the following happens:
- The user’s tier is changed to restricted
- Their per-trade limit drops to $5
- A +3% spread is added to all their quotes
- A risk event of type AUTO_RESTRICT is created
- The operator receives an S2S callback of type USER_RESTRICTED
- An entry is added to the audit log with the score breakdown
// Auto-restriction logic (simplified)
if (user.composite_score >= 90 && user.resolved_trade_count >= 20) {
await setUserTier(user.id, 'restricted');
await createRiskEvent({
type: 'AUTO_RESTRICT',
user_id: user.id,
details: {
score: user.composite_score,
classification: 'professional',
resolved_trades: user.resolved_trade_count,
metrics: user.score_breakdown
}
});
await notifyOperator(user.operator_id, 'USER_RESTRICTED', { user_id: user.id });
}
Batch Scoring Cron Job
Scoring runs as a server-side cron job, not in real-time. This keeps trade execution fast and avoids latency spikes during high-volume periods.
Daily Incremental Scoring
Runs at 03:00 UTC every day. It selects all users who had at least one trade resolve in the past 24 hours, recomputes their scores, and updates the user_scores table.
// Cron: 0 3 * * * (daily at 03:00 UTC)
// Job: score_users_daily
1. SELECT DISTINCT user_id FROM trades
WHERE resolved_at > NOW() - INTERVAL '24 hours'
2. For each user_id:
a. Fetch all resolved trades
b. Compute 5 metrics
c. Compute composite score
d. Determine classification
e. UPSERT into user_scores table
f. If classification changed → create risk_event
g. If auto-restriction triggered → update tier + notify
Weekly Full Re-Score
Runs every Sunday at 04:00 UTC. This is a complete re-score of every user with at least one resolved trade, regardless of recent activity. It catches edge cases where historical trade data was corrected or markets were re-settled.
// Cron: 0 4 * * 0 (Sundays at 04:00 UTC)
// Job: score_users_weekly_full
1. SELECT DISTINCT user_id FROM trades
WHERE resolved_at IS NOT NULL
2. Full re-score pipeline (same as daily, but all users)
3. Generate weekly scoring summary report
4. Email summary to God Mode admins
Data Model
Scores are stored in the user_scores table, which is updated by the cron jobs and read by the admin dashboards and risk engine.
CREATE TABLE user_scores (
user_id UUID PRIMARY KEY REFERENCES users(id),
composite_score NUMERIC(5,2) NOT NULL DEFAULT 0,
classification TEXT NOT NULL DEFAULT 'recreational',
win_rate_score NUMERIC(5,2) NOT NULL DEFAULT 50,
edge_score NUMERIC(5,2) NOT NULL DEFAULT 50,
timing_score NUMERIC(5,2) NOT NULL DEFAULT 0,
sizing_score NUMERIC(5,2) NOT NULL DEFAULT 50,
diversity_score NUMERIC(5,2) NOT NULL DEFAULT 10,
resolved_trades INTEGER NOT NULL DEFAULT 0,
scored_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
| Column | Type | Description |
|---|---|---|
| composite_score | NUMERIC(5,2) | Final weighted score, 0–100 |
| classification | TEXT | One of: recreational, moderate, sharp, professional |
| win_rate_score | NUMERIC(5,2) | Raw win rate metric, 0–100 |
| edge_score | NUMERIC(5,2) | Raw edge captured metric, 0–100 |
| timing_score | NUMERIC(5,2) | Raw timing metric, 0–100 |
| sizing_score | NUMERIC(5,2) | Raw sizing discipline metric, 0–100 |
| diversity_score | NUMERIC(5,2) | Raw diversification metric, 0–100 |
| resolved_trades | INTEGER | Total resolved trades at time of scoring |
| scored_at | TIMESTAMPTZ | Last time this user was scored |
Score History & Trends
Every time a user’s score changes by more than 5 points, a snapshot is written to the user_score_history table. This allows God Mode admins to view score trends over time and identify users who are rapidly improving (potential model deployment).
CREATE TABLE user_score_history (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id),
composite_score NUMERIC(5,2) NOT NULL,
classification TEXT NOT NULL,
delta NUMERIC(5,2) NOT NULL, -- change from previous
snapshot JSONB NOT NULL, -- full metric breakdown
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
The God Mode dashboard renders a sparkline chart from this data, showing the user’s score trajectory over the last 90 days. A rapidly rising score (e.g., +20 points in a week) triggers a SCORE_SURGE risk event.
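The snapshot and surge rules reduce to two simple predicates. This is a sketch with illustrative function names; the +20-in-a-week figure is taken from the example above and is assumed to be the threshold.

```typescript
// Write a history snapshot only when the score moved more than 5 points.
function shouldSnapshot(previousScore: number, newScore: number): boolean {
  return Math.abs(newScore - previousScore) > 5;
}

// Flag a SCORE_SURGE risk event on a rapid rise (e.g., +20 points in a week).
function isScoreSurge(scoreWeekAgo: number, newScore: number): boolean {
  return newScore - scoreWeekAgo >= 20;
}
```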
Operator & Admin Overrides
God Mode admins can override any aspect of the scoring system:
- Manual classification: Force a user into any classification regardless of score
- Score freeze: Lock a user’s score so cron jobs skip them
- Tier override: Set any tier independently of the scoring engine
- Auto-restriction toggle: Disable auto-restriction for a specific user (e.g., a known VIP whale the operator wants to keep)
All overrides are logged in the audit trail with the admin’s user ID and a required reason field.
Interaction with Risk Engine
The scoring engine feeds directly into several risk engine subsystems:
| Risk Feature | How Scoring Integrates |
|---|---|
| Per-trade limits | Tier (influenced by score) sets the max trade amount |
| Spread adjustments | Restricted tier adds +3% spread; custom spreads can be set per classification |
| Circuit breakers | When a sharp/professional user places a large trade, the circuit breaker threshold is lowered |
| Exposure caps | Professional users have their individual exposure cap reduced to 50% of normal |
| Risk events | Classification changes, score surges, and auto-restrictions all generate risk events |
| Operator callbacks | S2S callbacks fire on classification change and auto-restriction |
Frequently Asked Questions
How many trades before scoring is meaningful?
Technically, scoring activates after 1 resolved trade, but individual metrics have their own minimums (win rate needs 5, sizing needs 3). Scores become statistically reliable around 15–20 resolved trades. The auto-restriction threshold of 20 trades was chosen for this reason.
Can users game the score?
It’s difficult. Deliberately losing trades to lower a score also costs real money. The diversification metric prevents concentrating losses in cheap markets. The sizing metric catches users who try to lose small amounts while winning big. That said, a sufficiently motivated user could slowly degrade their score, but the manual admin override system catches these cases.
What score do new users get?
New users start with a default composite score of 36.5, which falls in the recreational band. This is computed from the neutral defaults: win rate 50, edge 50, timing 0, sizing 50, diversity 10. The formula yields 50*0.3 + 50*0.25 + 0*0.15 + 50*0.15 + 10*0.15 = 15 + 12.5 + 0 + 7.5 + 1.5 = 36.5. The score is then recomputed from real trade data when the user’s first trade resolves.
Are scores shared across operators?
No. Scores are computed per-operator. A user who plays on Casino A and Casino B will have two independent scores. God Mode admins can see a cross-operator aggregate, but it doesn’t affect individual operator scoring.
