Methodology

The Sitch method

A handy-dandy guide to exactly how we do things.

Overview

The Sitch methodology evaluates digital experience through a structured framework of 7 criteria, 27 sub-criteria and over 110 data points per brand. The framework measures each brand's digital presence against its competitive set to score, rank and classify it relative to its category.

Method Overview Diagram

Criteria

Our scoring model produces a score out of 100 for each brand at both the criteria and overall level. The framework assesses all crucial parts of a brand's digital presence (a worked example of how the weightings combine follows the criteria list):

Market presence (10%)

Measures a brand's visibility, demand, and performance in digital relative to its market position and competitors.

Sub-Criteria:
  • Brand power
  • Brand punch
  • Traffic power
  • Traffic punch

Brand experience (15%)

Evaluates how well a brand executes with distinctiveness, consistency and high quality in digital.

Sub-Criteria:
  • Consistency
  • Distinctiveness
  • Execution quality

Content experience (20%)

Assesses the breadth, quality, relevance, flow and novelty of a brand's digital content.

Sub-Criteria:
  • Hygiene
  • Novelty
  • Flow
  • Quality
  • Customer support

Features & functions (20%)

Assesses the functionality, usability, and novelty of a brand's digital features and tools.

Sub-Criteria:
  • Flow
  • Hygiene
  • Novelty
  • Top-funnel Task Testing

Depth of engagement (10%)

Measures how effectively a brand's digital presence captures and maintains user interest and interaction.

Sub-Criteria:
  • Visit Frequency
  • Visit Depth and Duration
  • Engagement Rate

Experience fundamentals (15%)

Evaluates the technical and structural aspects that form the foundation of a brand's digital presence.

Sub-Criteria:
  • Structure
  • Performance
  • Accessibility
  • Cross-device Experience

Conversion experience (10%)

Evaluates how effectively a brand's digital presence is designed to turn visitors into leads or customers.

Sub-Criteria:
  • Conversion Flow
  • Mid-funnel Task Testing
  • Full Conversion Task Testing
  • Visits to Sales Ratio
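
To make the weighting concrete, here is a minimal sketch of how seven criterion scores (each out of 100) roll up to a weighted overall score. Only the weights come from the criteria above; the brand scores in the example are hypothetical.

```python
# Minimal sketch: rolling criterion scores up to the overall score.
# The weights match the criteria above; the brand scores are hypothetical.
CRITERIA_WEIGHTS = {
    "Market presence": 0.10,
    "Brand experience": 0.15,
    "Content experience": 0.20,
    "Features & functions": 0.20,
    "Depth of engagement": 0.10,
    "Experience fundamentals": 0.15,
    "Conversion experience": 0.10,
}

brand_scores = {  # hypothetical criterion scores for one brand
    "Market presence": 70,
    "Brand experience": 60,
    "Content experience": 60,
    "Features & functions": 50,
    "Depth of engagement": 60,
    "Experience fundamentals": 80,
    "Conversion experience": 40,
}

overall = sum(CRITERIA_WEIGHTS[c] * brand_scores[c] for c in CRITERIA_WEIGHTS)
print(round(overall, 1))  # 60.0
```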

Process

Our evaluation follows seven key steps:

1. Parameter Setting

We determine the brand set for inclusion and define category-specific elements of the methodology. This includes identifying market participants, establishing category benchmarks, and defining relevant assessment criteria.

2. Literature Review

We review public research, data and industry news to form a view on the category state of play and trends. This includes market share data, industry reports, third-party consumer research and category-specific performance benchmarks.

3. Expert Review

Career strategists review each brand's digital presence using a proprietary review framework. The review covers qualitative assessments of user experience, brand consistency, content quality, feature usability and cross-device compatibility.

4. Third-party Data

We collect analytics data from third-party sources (e.g. SEMRush and Ad Clarity) for each brand. This provides quantitative metrics including traffic statistics, engagement rates, search visibility, advertising activity and competitive positioning data.

5. Technical Tests

We carry out technical tests on each brand's website to assess performance (e.g. Lighthouse, Screaming Frog). These measure site performance, accessibility compliance, SEO health indicators, cross-device responsiveness and security standards.
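
As a rough illustration of what one of these automated checks looks like (not our exact pipeline), the sketch below runs Google Lighthouse from Python and reads back its performance score. It assumes the Lighthouse CLI is installed (e.g. via npm) and Chrome is available; the URL and output path are placeholders.

```python
import json
import subprocess

def lighthouse_performance_score(url: str) -> float:
    """Run Lighthouse against a URL and return its performance score out of 100."""
    subprocess.run(
        [
            "lighthouse", url,
            "--output=json",
            "--output-path=report.json",
            "--chrome-flags=--headless",
            "--quiet",
        ],
        check=True,
    )
    with open("report.json") as f:
        report = json.load(f)
    # Lighthouse reports category scores on a 0-1 scale
    return report["categories"]["performance"]["score"] * 100

print(lighthouse_performance_score("https://www.example.com"))
```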

6. Data Processing

Data is ingested into our warehouse via our data schema to feed into our scoring model and dashboards. This step standardises data from all sources into comparable formats for scoring.
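
As an illustration only, the sketch below shows the kind of common record that source data could be standardised into before scoring. The field names are hypothetical, not our production schema.

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    brand: str
    criterion: str       # e.g. "Experience fundamentals"
    sub_criterion: str   # e.g. "Performance"
    metric: str          # e.g. "lighthouse_performance"
    value: float         # raw value from the source tool
    metric_type: str     # "standard" | "reverse" | "binary" | "pre_banded"
    source: str          # e.g. "Lighthouse"

record = DataPoint(
    brand="Example Brand",
    criterion="Experience fundamentals",
    sub_criterion="Performance",
    metric="lighthouse_performance",
    value=87.0,
    metric_type="standard",
    source="Lighthouse",
)
```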

7. Analysis

We analyse scoreboards and data sets to create category analysis and insights, identifying patterns, trends and opportunities for improvement.

Sources

Core Data Tools

SEMRush

A comprehensive digital marketing analytics platform that provides data on search visibility, traffic patterns, user engagement, and competitive positioning. We use this data to assess search volumes, traffic performance and sources, depth of engagement, and relative digital performance.

Google Lighthouse

An automated website auditing tool that evaluates technical performance, accessibility, SEO fundamentals, and best practices. This data informs our assessment of experience fundamentals and technical capabilities.

Screaming Frog

A website crawling tool that provides detailed technical SEO and content structure analysis. We use this to evaluate site architecture, content organisation, and technical implementation quality.

WAVE Accessibility Tool

An accessibility evaluation tool that checks compliance with WCAG guidelines and identifies potential barriers to access. This data supports our assessment of inclusive design and accessibility standards.

Google Rich Results Test

A validation tool for structured data and search result presentation. We use this to evaluate how effectively brands implement technical SEO elements that enhance search visibility.

AdClarity

A competitive intelligence platform for digital advertising. We use this data to assess advertising spend, reach, market activity and creative execution relative to competitors.

Additional Sources

We supplement our core tools with industry-specific market data, expert evaluations based on our proprietary framework, technical audits of specific features and functionality, and structured testing protocols for user experience assessment.

Scoring Methodology

Data Processing

Raw Data Collection

  • Quantitative metrics from third-party tools
  • Expert review scores (1-5 scale)
  • Technical test results
  • Binary / compliance checks (pass/fail)

Data Standardisation

All data points are converted to a standardised format based on their type (a simplified sketch follows the list below):

Standard Metrics
  • Raw values are ranked within the category
  • Rankings are divided into quintiles
  • Each quintile is assigned a band (1-5)
Reverse Metrics
  • Used when lower values indicate better performance
  • Rankings are reversed before banding
  • Brands with the lowest raw values receive the highest bands
Binary Metrics
  • Pass results receive band 5 (100 points)
  • Fail results receive band 1 (0 points)
Pre-banded Metrics
  • Expert review scores (1-5) are used directly as bands
  • No additional conversion required
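
As a simplified illustration of this step, the sketch below ranks raw values within a category and assigns quintile bands, with a reverse option for metrics where lower is better and a direct mapping for pass/fail checks. The production model also handles ties, non-applicable values and outliers.

```python
def band_standard(values: dict[str, float], reverse: bool = False) -> dict[str, int]:
    """Rank raw values within the category and assign quintile bands 1-5.

    Higher raw values earn higher bands by default; set reverse=True when
    lower values indicate better performance.
    """
    ordered = sorted(values, key=values.get, reverse=reverse)  # worst performer first
    n = len(ordered)
    return {brand: min(5, 1 + (rank * 5) // n) for rank, brand in enumerate(ordered)}

def band_binary(passed: bool) -> int:
    """Pass/fail checks map straight to the top or bottom band."""
    return 5 if passed else 1

traffic = {"Brand A": 1_200_000, "Brand B": 450_000, "Brand C": 90_000,
           "Brand D": 2_500_000, "Brand E": 700_000}
print(band_standard(traffic))                 # highest traffic -> band 5
print(band_binary(True), band_binary(False))  # 5 1
```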

Score Calculation

Band Normalisation
  • Each band (1-5) is converted to a 0-100 scale:
    • Band 1 = 0-19
    • Band 2 = 20-39
    • Band 3 = 40-59
    • Band 4 = 60-79
    • Band 5 = 80-100
Weighting Application
  • Each datapoint has a pre-assigned weight
  • Normalised scores are multiplied by their weights
  • Weighted scores sum to sub-criteria level
  • Sub-criteria sum to criteria level
  • Criteria sum to overall score
Quality Controls
  • Minimum data requirements per score level (99% data completeness for inclusion)
  • Non-applicable and outlier datapoint handling
  • Multiple automated/technical and manual validation checks throughout process
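
A simplified sketch of the roll-up follows, assuming each band is placed at the midpoint of its 0-100 range and that weights sum to 1 at each level; the production model places scores within the band ranges more precisely and applies the quality controls listed above.

```python
BAND_RANGES = {1: (0, 19), 2: (20, 39), 3: (40, 59), 4: (60, 79), 5: (80, 100)}

def normalise(band: int) -> float:
    """Convert a 1-5 band to a 0-100 score (midpoint of its range, for simplicity)."""
    low, high = BAND_RANGES[band]
    return (low + high) / 2

def weighted_rollup(items: list[tuple[int, float]]) -> float:
    """Combine (band, weight) pairs into a single 0-100 score.

    Because weights sum to 1 at each level, the same roll-up applies from
    datapoints to sub-criteria, sub-criteria to criteria, and criteria to
    the overall score.
    """
    return sum(normalise(band) * weight for band, weight in items)

# Hypothetical sub-criterion built from three datapoints
print(weighted_rollup([(5, 0.5), (3, 0.3), (1, 0.2)]))  # 61.75
```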

Category-Specific Adjustments

Our methodology adapts to category nuances through:

  • Customised task tests for key user journeys
  • Adjusted criteria weightings for category relevance
  • Industry-specific benchmarks and standards

Performance Classes

As part of ranking, we use brand scores to organise all brands into five performance classes, at every level of the methodology: overall, criteria, sub-criteria and datapoint level.

  • Legend: Scores 80-100
  • Leader: Scores 60-79
  • Average: Scores 40-59
  • Trailing: Scores 20-39
  • Failing: Scores 0-19
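
In code, the class thresholds map directly from a 0-100 score; a minimal sketch:

```python
def performance_class(score: float) -> str:
    """Map a 0-100 score to its performance class."""
    if score >= 80:
        return "Legend"
    if score >= 60:
        return "Leader"
    if score >= 40:
        return "Average"
    if score >= 20:
        return "Trailing"
    return "Failing"

print(performance_class(61.75))  # Leader
```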

Interpreting Results

Our scoring framework provides multiple levels of insight.

Key results

  • Score: 0-100 score at the overall, criteria and sub-criteria level
  • Class: Performance class at the overall, criteria, sub-criteria and datapoint level
  • Averages: Category and cohort averages for comparison
  • Competitor benchmarking: Competitor scores for comparison at various levels of the methodology (determined by category)

The Sitch methodology provides valuable insights for strategic planning:

Identify strengths and weaknesses:
  • High-scoring criteria indicate areas of strength.
  • Low-scoring criteria highlight opportunities for improvement.
Competitive benchmarking:
  • Compare your scores to category averages and top performers.
  • Identify areas where you're leading or lagging in the competitive landscape.
Prioritise improvements:
  • Focus on low-scoring areas with high weightings for maximum impact.
  • Consider the effort required versus potential score improvement.
Track progress over time:
  • Use consistent scoring to monitor improvements in specific areas.
  • Identify trends in your digital performance relative to competitors.
Align digital strategy:
  • Use criteria scores to inform resource allocation and strategic initiatives.
  • Ensure your digital strategy addresses key areas measured in the assessment.
Capitalise on strengths:
  • Leverage high-scoring areas in marketing and communications.
  • Consider how to maintain and further enhance areas of strong performance.

By thoroughly understanding your Sitch scores and how they're calculated, you can make informed decisions to enhance your digital presence and competitive positioning.

Key Outputs

The data collection process results not only in quantitative scoring outputs, but also in deeper analysis that draws on the contextual and qualitative insights collated throughout the process.

The key outputs of the methodology are:

Rankings

Brands within each category are ranked in order of their overall score, relative to the competitive set used in the analysis.

Category Analysis: State of Digital

A comprehensive analysis of the overall category results informs a report that details the key findings from the study and the category-level implications for brands.

Brand Drilldowns

Our brand drilldown feature allows brands to explore their score in detail at the criteria, sub-criteria and datapoint level to see where they lead, where they lag, and to get practical advice on how to improve.

Explorer Tools

Our interactive explorer tools allow brands to explore a subset of our data more deeply, compare their performance with other brands and extract specific data for benchmarking and/or reporting.

Case Studies

Our case studies document best-in-class, novel or otherwise interesting content or features with short descriptions and screenshots. These case studies allow digital practitioners to search, filter and browse category-specific digital examples, to get a fast and thorough sense of the category standards, and to inform their own digital projects.

Deep Dives

Our deep dives take key themes and findings from the study, and explore them more deeply. This could include assessing broader category trends with the power of the study dataset, or going deeper on specific sub-criteria or task tests to reveal anomalies, best practices or patterns. 

Study Parameters

Automotive 2024

The data collection for Automotive 2024 was conducted in July and August 2024, with annual data covering the period July 2023 to August 2024.

The brands included were:

Alfa Romeo, Aston Martin, Audi, Bentley, BMW, BYD, Chery, Citroen, CUPRA, Ferrari, Fiat, Ford, Genesis, GWM, Honda, Hyundai, Isuzu Ute, Jaguar, Jeep, Kia, Lamborghini, Land Rover, LDV, Lexus, Lotus, Maserati, Mazda, McLaren, Mercedes-Benz, MG, MINI, Mitsubishi, Nissan, Peugeot, Polestar, Porsche, RAM, Renault, Rolls-Royce, Skoda, SsangYong, Subaru, Suzuki, Tesla, Toyota, Volkswagen, Volvo.

The study was designed to include brands that sell passenger cars, including small, medium, executive and luxury cars, as well as SUVs, light utility vehicles, vans and sportscars. Brands that sell buses, trucks, motorcycles and heavy commercial vehicles were excluded from the study. Where brands sell a combination of passenger cars and excluded vehicle types, inclusion was determined on a case-by-case basis.

More Questions

For additional information or specific enquiries:

Contact us at hello@gositch.com.au