Server-Side Fingerprinting Explained: How Tracking Works Without Cookies
By Karel Kubicek, Senior Privacy Researcher | March 25, 2026
Key Takeaways
- Server-side fingerprinting sends raw browser data to a centralized backend that uses machine learning to link user sessions, even when underlying attributes change. Unlike client-side hashing, which breaks when a user updates their browser or resizes a window, server-side systems tolerate attribute fluctuations by selectively weighting features based on reliability rather than raw uniqueness.
- Most browser privacy tools fail against server-side fingerprinting. Research testing Fingerprint Pro found that Firefox Strict ETP, Brave, Canvas blockers, and User-Agent switchers were all defeated by server-side re-identification. Only the Tor browser and Firefox’s Resist Fingerprinting mode reliably prevented tracking, and combining any tool with a VPN further degraded the fingerprinter’s accuracy.
- “Cookieless” tracking is not compliant tracking. European regulators have already fined Criteo (€40M), Apple (€8M), and Vueling (€30K) for fingerprinting practices, and both the EDPB and CNIL have ruled that device fingerprinting requires opt-in consent under the ePrivacy Directive, the same standard applied to cookies.
In a previous post on Vault JS, Device Fingerprinting: Tracking Without Cookies or Consent, we discussed how the digital exhaust of a user’s device can be combined to track individuals without storing a single cookie on their machine. Screen resolutions, installed fonts, and hardware specifications create a unique profile. We highlighted how this opaque technique is rapidly becoming the tracking mechanism of choice as traditional cookies face deprecation and heavy regulatory scrutiny.
As Laperdrix et al. rigorously documented in their foundational survey, Browser Fingerprinting: A Survey [2], device fingerprinting has long been a potent, stateless tracking vector. But the cat-and-mouse game has evolved. We are no longer just dealing with static, client-side scripts. To truly understand the modern tracking landscape, we must look at the shift to server-side processing of browser fingerprinting.
A recent study by Luo et al., titled Understanding Server-side Commercial Fingerprinting [1], provides an unprecedented look under the hood of this paradigm. By analyzing this new research, we can dissect exactly how modern fingerprinters outsmart browser protection.
How Does Server-Side Fingerprinting Differ from Client-Side Hashing?
Historically, fingerprinting happened on the client side. A JavaScript library would gather various browser attributes, concatenate them, and generate a static hash. If a user updated their browser version or changed their window size, the hash would break, and the user would appear as a new visitor.
Server-side processing changes the rules. Instead of hashing locally, scripts exfiltrate the raw browser data payload back to a centralized backend. There, providers use machine learning, fuzzy matching algorithms, and long-term state to link user sessions even when underlying attributes fluctuate. In essence, applying machine learning to fingerprinting has dramatically improved the reliability and consistency of user identification.
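To make the contrast concrete, here is a minimal Python sketch of the two approaches. This is not any vendor’s actual algorithm: the attribute names, the unweighted similarity score, and the 0.7 threshold are illustrative assumptions.

```python
import hashlib

def client_side_hash(attrs: dict) -> str:
    """Classic client-side fingerprint: a static hash over concatenated
    attributes. Any single change (browser update, resize) breaks the link."""
    payload = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(payload.encode()).hexdigest()

def server_side_link(new: dict, old: dict, threshold: float = 0.7) -> bool:
    """Toy server-side linker: fraction of matching attributes, so a single
    fluctuation no longer severs the identity. (Threshold is illustrative.)"""
    keys = set(new) | set(old)
    score = sum(new.get(k) == old.get(k) for k in keys) / len(keys)
    return score >= threshold

session_1 = {"ua": "Firefox/121", "screen": "1920x1080", "fonts": "Arial,Fira",
             "hw_concurrency": 8, "color_depth": 24}
session_2 = dict(session_1, ua="Firefox/122")  # routine browser update

hash_broke = client_side_hash(session_1) != client_side_hash(session_2)
still_linked = server_side_link(session_1, session_2)
```

With a single updated attribute, the static hash changes entirely while the fuzzy linker still matches 4 of 5 attributes and keeps the identity intact.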
How Did Researchers Test Server-Side Fingerprinting?
To study this, Luo et al. took a clever grey-box testing approach whose value lies in the fact that the researchers controlled every moving part of the experiment. They purchased access to Fingerprint Pro, a market-leading commercial fingerprinting service, and embedded it on a website they fully controlled. They then ran a custom automated crawler to visit the site and programmatically retrieve the resulting visitor identifiers from the service’s dashboard.
The researchers used realistic, synthetic browser profiles generated by the open-source Apify Fingerprint Suite alongside an extended version of the FP-Spoofer browser extension from Lin et al. [3]. By systematically mutating single attributes, they could observe exactly when the backend generated a new visitor ID (VID, as Fingerprint Pro calls it) and when it successfully matched the user to an old profile.
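Conceptually, the single-attribute mutation protocol can be sketched as follows. Here `visit_and_get_vid` is a hypothetical stand-in for the crawler-plus-dashboard loop, and the attribute values are illustrative; a real run would drive a browser with the mutated profile and read the VID back from the service.

```python
# Grey-box protocol sketch: mutate one attribute at a time and observe
# whether the backend re-identifies the visitor or issues a new VID.
BASELINE = {"userAgent": "Firefox/121", "screen": "1920x1080", "canvas": "c0ffee"}
MUTATIONS = {"userAgent": "Firefox/122", "screen": "1280x720", "canvas": "deadbeef"}

def visit_and_get_vid(profile: dict) -> int:
    # Placeholder oracle: a real experiment would load the instrumented
    # page with this profile and retrieve the VID from the dashboard.
    return hash(frozenset(profile.items()))

def classify_attributes() -> dict:
    baseline_vid = visit_and_get_vid(BASELINE)
    results = {}
    for attr, new_value in MUTATIONS.items():
        mutated = dict(BASELINE, **{attr: new_value})
        outcome = ("re-identified" if visit_and_get_vid(mutated) == baseline_vid
                   else "new visitor")
        results[attr] = outcome
    return results
```

The placeholder oracle treats every change as a new visitor; the whole point of the study is that the real backend does not.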
Which Browser Attributes Does Server-Side Fingerprinting Actually Use?
One of the most fascinating scientific takeaways from this study becomes clear when we view it through the lens of information entropy, a measure of how uniquely an attribute’s value identifies a user.
Prior research, such as Bacis et al.’s Assessing Web Fingerprinting Risk [4], has demonstrated that collecting a massive surface of attributes yields incredibly high information entropy. High-entropy features, like Canvas readouts and plugin arrays, have traditionally been viewed as the ultimate tracking vectors.
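To ground the entropy intuition, the Shannon entropy of an attribute can be computed from how its values are distributed across users. The sample distributions below are invented for illustration (they are not measured data): eight users with all-distinct Canvas readouts versus eight users sharing a few common deviceMemory values.

```python
from collections import Counter
import math

def shannon_entropy(values) -> float:
    """Shannon entropy (in bits) of an attribute's value distribution."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# High-entropy attribute: every user has a distinct Canvas readout.
canvas_readouts = [f"hash{i}" for i in range(8)]
# Low-entropy attribute: most users share a handful of deviceMemory values.
device_memory = [8, 8, 8, 16, 8, 16, 8, 4]

canvas_bits = shannon_entropy(canvas_readouts)   # 3.0 bits: maximal for 8 users
memory_bits = shannon_entropy(device_memory)     # ~1.3 bits: far less identifying
```

Higher entropy means the attribute alone narrows the crowd down further, which is why Canvas and plugin arrays were traditionally seen as prime tracking vectors.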
However, the attribute modification tests (table below, taken from [1]) reveal a paradox. The researchers tested 26 distinct browser attributes and found that modifying high-entropy features in isolation does not change the identifier. The service essentially ignores or de-weights these features, so installing or removing a plugin or font does not fool the tracker into seeing you as a different user.
Table 3: Results of modifying a single browser attribute, grouped by how the server-side algorithm responds. Notice how changing the IP address to a new subnet makes the algorithm much stricter regarding Canvas, JS Fonts, and WebGL Shader Precisions.
| Outcome | Any IP | Same IP Only |
|---|---|---|
| Successful re-identification (unchanged identifier) | User-Agent (version updates), screen resolution, Do Not Track, plugins list, WebGL context parameters, AudioContext, emoji, MathML, font preferences, device memory | Canvas (toDataURL), JS fonts list, WebGL shader precisions |
| Seen as new user (changed identifier) | HTTP Accept header, color depth, hardware concurrency, WebGL (renderer/vendor, extensions list, context attributes), math operations | (none) |
| Seen as new user, flagged for tampering (“browser tampering detected”) | User-Agent (major change), platform, OS CPU, vendor | (none) |
| Seen as new user, flagged as a bot (“bot detected”) | HTTP Content-Language | (none) |
Yet the rule is not as simple as “high entropy is ignored, low entropy is trusted.” For instance, modifying WebGL attributes (such as the renderer, extensions, and unmasked vendor) reliably changes the visitor identifier, even though these are highly unique, high-entropy features. Conversely, altering deviceMemory, a low-entropy feature that takes only a handful of common values and rarely changes in practice (few users physically upgrade their RAM), does not change the identifier at all.
The server-side algorithm selectively weights features based on reliability rather than raw uniqueness. It ignores surfaces that are noisy or frequently randomized by basic privacy extensions (like Canvas fingerprinting), but it heavily penalizes changes in harder-to-spoof hardware rendering pipelines (WebGL) and foundational hardware limits (hardwareConcurrency, colorDepth).
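A rough model of this reliability-based weighting follows from the same-IP column of the table above. The feature sets and the all-or-nothing decision rule are our simplification for illustration, not Fingerprint Pro’s actual logic.

```python
# Simplified decision rule inferred from the study's Table 3 (same-IP case):
# noisy, frequently-randomized surfaces are tolerated, while hard-to-spoof
# hardware signals force a new identifier when they change.
TOLERATED = {"canvas", "js_fonts", "plugins", "audio", "device_memory", "emoji"}
HARD_SIGNALS = {"webgl_renderer", "hardware_concurrency", "color_depth",
                "http_accept", "math_ops"}

def same_visitor(baseline: dict, observed: dict) -> bool:
    """Link the observed profile to the baseline unless a reliable,
    hard-to-spoof signal has changed."""
    for feature in HARD_SIGNALS:
        if observed.get(feature) != baseline.get(feature):
            return False  # reliable signal changed -> treat as a new user
    return True  # changes confined to noisy, tolerated surfaces

baseline = {"canvas": "aa11", "webgl_renderer": "ANGLE (NVIDIA)",
            "hardware_concurrency": 8, "color_depth": 24,
            "http_accept": "text/html", "math_ops": "m0"}
randomized = dict(baseline, canvas="bb22")          # Canvas blocker output
hw_change = dict(baseline, hardware_concurrency=4)  # hardware-level change
```

Under this rule a Canvas blocker accomplishes nothing, while a change to a foundational hardware limit severs the link, matching the asymmetry the researchers observed.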
How Do IP Addresses and Cookies Strengthen Server-Side Fingerprinting?
Server-side fingerprinting does not operate in a vacuum. It uses exogenous factors to continuously train its re-identification models, and the research confirms that the IP address remains one of the most reliable fingerprinting vectors available.
When a user modifies a feature like the Canvas readout or JS Fonts from the same baseline IP address, the system tolerates the change. However, when those exact same attributes are mutated in conjunction with changing the IP address to a completely new subnet, Fingerprint Pro immediately issues a new identifier. The IP address acts as a critical anchor that dictates how strictly the backend evaluates the rest of the browser profile.
Interestingly, Fingerprint Pro treats VPN traffic as a special case. When a user routes their traffic through a VPN, the system’s sensitivity to attribute changes drops dramatically. As the data shows, almost all attributes that normally trigger a new identifier (like WebGL changes, HTTP Accept headers, or Math operations) suddenly result in successful re-identification. Because the tracker recognizes that the network environment is explicitly masking identity, it stops relying on network signals and easily spoofed software signals. Instead, it leans entirely on immutable hardware traits (like colorDepth and hardwareConcurrency) and aggressively fuzzy-matches the rest of the profile.
Furthermore, if the service manages to set a first-party cookie, that persistent state overrules almost any other attribute mutation. The combination of stable IP addresses, VPN awareness, and long-lived cookies provides the exact supervisory signals needed to track users as their hardware and software naturally evolve over time.
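The context-dependent strictness described above can be summarized as a threshold that moves with the network evidence. This is a toy model of the behavior the study reports; the specific numbers are illustrative assumptions, not values from Fingerprint Pro.

```python
def match_threshold(same_subnet: bool, via_vpn: bool, has_cookie: bool) -> float:
    """Toy model of how strictly the backend evaluates a browser profile,
    given its network context (all thresholds are illustrative)."""
    if has_cookie:
        return 0.1  # first-party cookie overrules almost any mutation
    if via_vpn:
        return 0.3  # network identity masked: fuzzy-match on hardware traits
    if same_subnet:
        return 0.5  # stable IP anchor: tolerate attribute churn
    return 0.9      # brand-new subnet: demand a near-exact profile match
```

The ordering is the point: a cookie is the loosest gate, a VPN loosens matching further than a stable IP, and a fresh subnet makes the backend strictest of all.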
Do Browser Privacy Tools Actually Stop Server-Side Fingerprinting?
The true impact of server-side processing is highlighted in the study’s evaluation of Privacy-Enhancing Technologies (PETs), mapped out in their Table 4 results.
The researchers tested a battery of popular defenses, including Firefox Strict Enhanced Tracking Protection (ETP), the Brave browser, Canvas blockers, and User-Agent switchers. As a baseline, every single one of these tools successfully defeated traditional client-side fingerprinting libraries.
Against the server-side Fingerprint Pro service, however, they mostly failed. The backend successfully re-identified the browsers across sessions despite the randomized data. The system has a high tolerance for fine-grained changes, easily tracking users even when they switch into private or incognito browsing modes. Only extreme measures, such as the Tor browser or Firefox’s “Resist Fingerprinting” (RFP) mode, restricted enough APIs and enforced enough uniformity to reliably thwart the server-side tracking.
Table 4: Effectiveness of browser anti-fingerprinting measures.
| Privacy-Enhancing Tool | Anti-Fingerprinting Measures | Client-Side (FingerprintJS) | Server-Side (Fingerprint Pro) | Server + VPN |
|---|---|---|---|---|
| Firefox Strict ETP | Canvas randomization; font and math operation uniformity | Y | N | Y |
| Firefox resistFingerprinting (RFP) | Many APIs report uniform values | Y | Y | Y |
| Brave browser | Some API restriction and randomization | Y | N | Y |
| Tor browser | Heavy API restriction; uniform reporting | Y | Y | Y |
Is Server-Side Fingerprinting Used for Security or Just Advertising?
Why are companies investing so heavily in such invasive technologies? As Senol et al. point out in The Double Edged Sword: Identifying Authentication Pages and their Fingerprinting Behavior [5], fingerprinting is not just for targeted advertising. While tools like Fingerprint Pro are famous for general visitor identification, they cross heavily into the security realm. These services are aggressively deployed on login and sign-up pages to protect against account takeovers and fraud, leveraging inconsistencies in features to trigger bot detection. Mitigations against fingerprinting must reckon with this dual nature. Blocking these tracking scripts protects privacy, but it can concurrently blind crucial security defenses.
Does "Cookieless" Tracking Still Require User Consent Under GDPR?
Why should privacy professionals care about the shift to server-side tracking? Because “cookieless” does not mean compliant.
According to WP29 Opinion 9/2014 [7] and the newer EDPB 2/2023 guidelines [8], fingerprinting falls squarely under the technical scope of accessing terminal device information. Simply put: device fingerprinting requires opt-in consent under the ePrivacy Directive, just like cookies.
European regulators are already enforcing this, even when companies attempt to hide behind complex, server-side probabilistic models:
- CNIL vs. Criteo (€40M fine, 2023): The French regulator fined Criteo for its tracking practices, specifically calling out [9] the collection of User Agents and hashed IP addresses. The CNIL noted that Criteo used a data table to run “probabilistic” reconciliation of identifiers, guessing that two sessions belonged to the same user without a direct cookie ID.
- AEPD vs. Vueling (€30k fine, 2022): In an enforcement action against the airline Vueling, the Spanish regulator (AEPD) explicitly ruled that device fingerprinting is subject to Article 22.2 of the LSSI (the Spanish ePrivacy Law) [10]. In their subsequent 2024 technical guidance, the AEPD directly attacked “Cookieless Monsters”, stating that Canvas and Font fingerprinting create highly precise profiles and are strictly illegal for marketing without opt-in consent.
- CNIL vs. Apple (€8M fine, 2022): Apple was fined for fingerprinting mobile devices on the App Store prior to obtaining user consent [11].
The tracking arms race is accelerating. As Iqbal et al. demonstrated in Fingerprinting the Fingerprinters [6], machine learning tools are finding fingerprinting scripts on over 10% of the Web’s top 100K sites. Privacy professionals need to be keenly aware that even when blocking cookies in “no consent” scenarios, third-party vendors may still be fingerprinting users and collecting private data in the background.
This is where Vault JS bridges the gap. Vault JS provides deep visibility into all types of private data collection, including obfuscated server-side fingerprinting. By leveraging research methods from studies like Luo et al., Vault JS actively classifies tracking payloads by their privacy risks, detects probabilistic matching, and exposes cookie syncing. You cannot govern what you cannot see, and Vault JS ensures these invisible tracking vectors are brought to light.
(For a brief, critical assessment of the research methodology and its limitations, please see the Appendix below.)
References:
- Luo, E., Ritter, T., Savage, S., & Voelker, G. M. (2026). Understanding Server-side Commercial Fingerprinting. Proceedings of the ACM Web Conference 2026, Dubai, United Arab Emirates. ACM ISBN 979-8-4007-2307-0/2026/04. https://doi.org/10.1145/3774904.3792687
- Laperdrix, P., Bielova, N., Baudry, B., & Avoine, G. (2020). Browser Fingerprinting: A Survey. ACM Transactions on the Web (TWEB).
- Lin, X., et al. (2022). Phish in Sheep’s Clothing: Exploring the Authentication Pitfalls of Browser Fingerprinting. 31st USENIX Security Symposium (USENIX Security 22).
- Bacis, E., et al. (2024). Assessing Web Fingerprinting Risk. Proceedings of the ACM Web Conference 2024.
- Senol, A., Ukani, A., Cutler, D., & Bilogrevic, I. (2024). The Double Edged Sword: Identifying Authentication Pages and their Fingerprinting Behavior. Proceedings of the ACM Web Conference 2024.
- Iqbal, U., Englehardt, S., & Shafiq, Z. (2021). Fingerprinting the Fingerprinters: Learning to Detect Browser Fingerprinting Behaviors. IEEE Symposium on Security and Privacy (S&P).
- Article 29 Data Protection Working Party (2014). Opinion 9/2014 on the application of Directive 2002/58/EC to device fingerprinting.
- European Data Protection Board (EDPB) (2023). Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy Directive.
- Commission Nationale de l’Informatique et des Libertés (CNIL) (2023). Deliberation SAN-2023-009 of June 15, 2023 concerning the company CRITEO.
- Agencia Española de Protección de Datos (AEPD) (2022). Resolution PS/00077/2022 (Vueling Airlines, S.A.); see also AEPD Technical Guidelines (2024).
- Commission Nationale de l’Informatique et des Libertés (CNIL) (2022). Deliberation SAN-2022-023 of December 22, 2022 concerning APPLE DISTRIBUTION INTERNATIONAL.
Appendix: Critical Assessment of the Study's Limitations
While Luo et al.’s work unlocks a new avenue for studying these systems, viewing it critically reveals a few areas where the scientific community must dig deeper:
- Vendor Generality: This work is a case study on Fingerprint Pro. To confirm that server-side fingerprinting behaves this way across the broader ecosystem, future experiments should evaluate other vendors, such as Shopify or pure-play anti-fraud platforms.
- Expected vs. Synthetic Changes: Testing synthetic mutations is excellent, but analyzing “expected” natural changes over time (like a standard browser update or a natural resize of the browser window) would provide the complementary real-world context on Fingerprint Pro’s effectiveness for their customers.
- Discrete vs. Parallel Mutations: The study evaluates changes one attribute at a time. In reality, however, multiple parameters change at once. The combinatorial cost of testing every combination excuses the authors, but evaluating at least pairs of simultaneous changes could help determine whether Fingerprint Pro relies on deterministic linear weighting or on more complex machine learning models.

Karel Kubicek
Senior Privacy Researcher, Vault JS
He holds a PhD from ETH Zurich in automated privacy compliance and was previously a postdoctoral researcher at INRIA. His work focuses on using machine learning to measure and detect privacy violations at scale, and he led the development of CookieBlock, a privacy-enhancing browser extension with over 20,000 installations that received a USENIX Security Distinguished Artifact Award.