Methodology
A handy-dandy guide to exactly how we do things.
The Sitch methodology evaluates digital experience through a structured framework using 7 criteria, 27 sub-criteria and over 110 data points per brand. The framework measures each brand's digital presence against its competitive set to score, rank and classify it relative to its category.
Our scoring model produces a score out of 100 for each brand at both criteria and overall levels. The framework assesses all crucial parts of a brand's digital presence:
Measures a brand's visibility, demand, and performance in digital relative to its market position and competitors.
Evaluates how well a brand executes with distinctiveness, consistency and high quality in digital.
Assesses the breadth, quality, relevance, flow and novelty of a brand's digital content.
Assesses the functionality, usability, and novelty of a brand's digital features and tools.
Measures how effectively a brand's digital presence captures and maintains user interest and interaction.
Evaluates the technical and structural aspects that form the foundation of a brand's digital presence.
Evaluates how effectively a brand's digital presence is designed to turn visitors into leads or customers.
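As a sketch of how a criteria-based model like this can roll up to a single score out of 100, the snippet below takes per-criterion scores and computes a weighted average. The criterion names and weights here are illustrative placeholders, not the actual Sitch weighting:

```python
# Illustrative sketch of rolling criterion scores (0-100) up into an
# overall brand score via a weighted average. The criterion names and
# weights are hypothetical examples, not the actual Sitch weighting.

CRITERIA_WEIGHTS = {
    "visibility": 0.20,
    "brand_execution": 0.15,
    "content": 0.15,
    "features": 0.15,
    "engagement": 0.15,
    "fundamentals": 0.10,
    "conversion": 0.10,
}

def overall_score(criterion_scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores, each on a 0-100 scale."""
    total = sum(
        CRITERIA_WEIGHTS[name] * score
        for name, score in criterion_scores.items()
    )
    return round(total, 1)

scores = {
    "visibility": 72, "brand_execution": 81, "content": 64,
    "features": 58, "engagement": 70, "fundamentals": 88,
    "conversion": 61,
}
print(overall_score(scores))
```

Because each criterion score is out of 100 and the weights sum to 1, the overall score stays on the same 0–100 scale as its inputs.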
Our evaluation follows seven key steps:
We determine the brand set for inclusion and define category-specific elements of the methodology. This includes identifying market participants, establishing category benchmarks, and defining relevant assessment criteria.
We review public research, data and industry news to form a view on the category state of play and trends. This includes market share data, industry reports, third party consumer research and category-specific performance benchmarks.
Career strategists review each brand's digital presence using a proprietary review framework. The review covers qualitative assessments of user experience, brand consistency, content quality, feature usability and cross-device compatibility.
We collect analytics data from third-party sources (e.g. SEMRush and Ad Clarity) for each brand. This provides quantitative metrics including traffic statistics, engagement rates, search visibility, advertising activity and competitive positioning data.
We carry out technical tests for each brand website to assess performance (e.g. Lighthouse, Screaming Frog). These measure site performance, accessibility compliance, SEO health indicators, cross-device responsiveness and security standards.
Data is ingested into our warehouse via our data schema to feed into our scoring model and dashboards. This step standardises data from all sources into comparable formats for scoring.
We analyse scoreboards and data sets to create category analysis and insights, identifying patterns, trends and opportunities for improvement.
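The standardisation step above — converting data from many sources into comparable formats — can be sketched with simple min-max scaling against the competitive set. This is an assumed approach for illustration (the metric and values are hypothetical), not necessarily the transformation Sitch applies:

```python
# Sketch of standardising raw metrics from different sources onto a
# comparable 0-100 scale using min-max scaling across the brand set.
# The metric and values below are hypothetical examples.

def min_max_scale(values: list[float]) -> list[float]:
    """Scale raw values to 0-100 relative to the set's min and max."""
    lo, hi = min(values), max(values)
    if hi == lo:  # every brand identical on this metric
        return [50.0] * len(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

# Raw monthly organic traffic for four brands (hypothetical):
traffic = [120_000, 450_000, 90_000, 300_000]
print(min_max_scale(traffic))
```

Scaling each metric against the competitive set, rather than an absolute benchmark, keeps every data point interpretable as relative market position.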
A comprehensive digital marketing analytics platform that provides data on search visibility, traffic patterns, user engagement, and competitive positioning. We use this data to assess search volumes, traffic performance and sources, depth of engagement, and relative digital performance.
An automated website auditing tool that evaluates technical performance, accessibility, SEO fundamentals, and best practices. This data informs our assessment of experience fundamentals and technical capabilities.
A website crawling tool that provides detailed technical SEO and content structure analysis. We use this to evaluate site architecture, content organisation, and technical implementation quality.
An accessibility evaluation tool that checks compliance with WCAG guidelines and identifies potential barriers to access. This data supports our assessment of inclusive design and accessibility standards.
A validation tool for structured data and search result presentation. We use this to evaluate how effectively brands implement technical SEO elements that enhance search visibility.
A competitive intelligence platform for digital advertising that provides insights into market activity and creative execution. This data helps assess advertising spend, reach and creative execution.
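Several of these tools can emit machine-readable reports. As one example, a Lighthouse run with JSON output produces per-category scores on a 0–1 scale; the sketch below extracts them from a trimmed stand-in report (a real report file would contain many more fields):

```python
import json

# Sketch of pulling category scores out of a Lighthouse JSON report
# (e.g. from `lighthouse https://example.com --output=json`). The
# report below is a trimmed stand-in for a real report file.

sample_report = json.loads("""
{
  "categories": {
    "performance":   {"score": 0.83},
    "accessibility": {"score": 0.91},
    "seo":           {"score": 0.76}
  }
}
""")

def category_scores(report: dict) -> dict[str, int]:
    """Return each Lighthouse category score on a 0-100 scale."""
    return {
        name: round(cat["score"] * 100)
        for name, cat in report["categories"].items()
    }

print(category_scores(sample_report))
```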
We supplement our core tools with industry-specific market data, expert evaluations based on our proprietary framework, technical audits of specific features and functionality, and structured testing protocols for user experience assessment.
All data points are converted to a standardised format based on their type:
Our methodology adapts to category nuances through:
As part of ranking, we use brand scores to organise all brands into five performance classes at every level of the methodology: overall, criteria, sub-criteria and data point.
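One way to derive five performance classes from a set of brand scores is a rank-based split into quintiles, sketched below. This is an assumed banding scheme for illustration; the actual Sitch class boundaries may be defined differently:

```python
# Sketch of assigning brands to five performance classes from their
# scores, using a rank-based split into quintiles. The actual Sitch
# class boundaries may be defined differently.

def classify(scores: dict[str, float]) -> dict[str, int]:
    """Assign each brand a class from 1 (top) to 5 (bottom) by rank."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    return {
        brand: (i * 5) // n + 1  # split the ranking into five bands
        for i, brand in enumerate(ranked)
    }

scores = {"A": 88, "B": 74, "C": 71, "D": 65, "E": 52,
          "F": 49, "G": 41, "H": 38, "I": 30, "J": 22}
print(classify(scores))
```

A rank-based split guarantees each class is populated, whereas fixed score thresholds would let classes sit empty in weak or strong categories.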
Our scoring framework provides multiple levels of insight.
By thoroughly understanding your Sitch scores and how they're calculated, you can make informed decisions to enhance your digital presence and competitive positioning.
The data collection process results not only in quantitative scoring outputs, but in deeper analysis using the contextual and qualitative insights collated throughout the process.
The key outputs of the methodology are:
Brands within each category are ranked in order of their overall score, relative to the competitive set used in the analysis.
A comprehensive analysis of the overall category results informs a report that details the key findings from the study and the category-level implications for brands.
Our brand drilldown feature allows brands to explore their score in detail at the criteria, sub-criteria and data point level to see where they lead, where they lag, and to get practical advice on how to improve.
Our interactive explorer tools allow brands to explore a subset of our data more deeply, compare their performance with other brands and extract specific data for benchmarking and/or reporting.
Our case studies document best-in-class, novel or otherwise interesting content or features with short descriptions and screenshots. These case studies allow digital practitioners to search, filter and browse category-specific digital examples, to get a fast and thorough sense of the category standards, and to inform their own digital projects.
Our deep dives take key themes and findings from the study, and explore them more deeply. This could include assessing broader category trends with the power of the study dataset, or going deeper on specific sub-criteria or task tests to reveal anomalies, best practices or patterns.
The data collection for Automotive 2024 was conducted in July and August 2024, with a focus on the period July 2023 to August 2024 for annual data.
The brands included were:
Alfa Romeo, Aston Martin, Audi, Bentley, BMW, BYD, Chery, Citroen, CUPRA, Ferrari, Fiat, Ford, Genesis, GWM, Honda, Hyundai, Isuzu Ute, Jaguar, Jeep, Kia, Lamborghini, Land Rover, LDV, Lexus, Lotus, Maserati, Mazda, McLaren, Mercedes-Benz, MG, MINI, Mitsubishi, Nissan, Peugeot, Polestar, Porsche, RAM, Renault, Rolls-Royce, Skoda, SsangYong, Subaru, Suzuki, Tesla, Toyota, Volkswagen, Volvo.
The study was designed to include brands that sell passenger cars, including small, medium, executive and luxury cars, as well as SUVs, light utility vehicles, vans and sportscars. Brands that sell buses, trucks, motorcycles and heavy commercial vehicles were excluded from the study. Where a brand sells a combination of passenger cars and excluded vehicles, its inclusion was decided at our discretion.
For additional information or specific enquiries:
Contact us at hello@gositch.com.au