
Meta Says Bone Structure Analysis Isn't Facial Recognition. It Is.

It's facial recognition. They renamed it. The math didn't change.

In May 2026, Meta clarified its position on a category of features it's been quietly rolling out: AI-powered analysis of a user's bone structure and physical appearance — height, jaw shape, cheekbones, body proportions. According to Meta's official statements, this category is not facial recognition.

That position is wrong. Not as a matter of marketing or policy preference. As a matter of how the technology actually works.

Bone structure measurement is the literal foundation of how every modern face-recognition system identifies a face. Saying you're not doing facial recognition because you're measuring bone structure is like saying you're not driving because you're using the wheels.

The short version: "Facial recognition" as defined by every major US state biometric law, by GDPR, and by NIST's standard test methodology specifically includes the geometric measurement of facial features — which is what bone-structure analysis is. The label change is regulatory arbitrage, not a technical distinction.

What Meta Said

Meta's May 2026 statements drew a line: products that classify or identify users based on AI analysis of bone structure, height, posture, and similar physical attributes are not, in their view, facial recognition. They've used this framing to position several features within Ray-Ban Display and the next-gen Meta AI assistants — features that would be subject to BIPA, GDPR Article 9, and other biometric-specific consent regimes if classified as facial recognition.

The argument, as articulated, is roughly: facial recognition produces a unique identifier of an individual person; bone-structure analysis produces a description of physical attributes. Different output, different category.

That distinction collapses on the first technical look.

What Facial Recognition Actually Is, Mechanically

Modern face-recognition systems — FaceNet, ArcFace, the models used by Clearview, PimEyes, FaceCheck.ID, and basically every NIST-tested vendor — work the same way at the pixel-to-identity layer:

  1. Detect a face in the image.
  2. Locate facial landmarks — typically 68 to 478 specific points: pupil centers, nose tip, jawline points, mouth corners, cheekbone apex, eyebrow ridges. These are the points the bone underneath the skin pushes outward.
  3. Extract a feature vector — a list of numbers (commonly 128 to 512 floats) representing the geometric and textural relationships between those landmarks.
  4. Compare the vector to a database of vectors. Close matches = same identity.

The "facial recognition" part is the comparison step. The vector that gets compared is, fundamentally, a measurement of the face's underlying geometry — the shape of the skull and the soft tissue lying on it. There is no version of facial recognition that doesn't measure bone structure. That's what the model is doing.
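Concretely, steps 3 and 4 reduce to vector geometry. Here's a minimal sketch — the 128-dimensional embeddings below are fabricated stand-ins (real ones come out of a trained model like FaceNet or ArcFace), and the 0.6 match threshold is illustrative, not any vendor's actual setting:

```python
# Sketch of the pixel-to-identity comparison described above.
# Embeddings are random stand-ins; real systems derive them from
# facial-landmark geometry.
import math
import random

def cosine_similarity(a, b):
    """Compare two face embeddings; higher = more likely same person."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.6):
    """Step 4: compare the probe vector against a database of vectors."""
    best_name, best_score = None, -1.0
    for name, vector in database.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy 128-float embeddings (the step-3 output); a small perturbation
# stands in for the same face photographed twice.
random.seed(0)
alice = [random.gauss(0, 1) for _ in range(128)]
bob = [random.gauss(0, 1) for _ in range(128)]
alice_again = [x + random.gauss(0, 0.05) for x in alice]

db = {"alice": alice, "bob": bob}
match = identify(alice_again, db)  # matches "alice"
```

Nothing in this comparison step knows or cares what the vendor calls the product. If the vector encodes face geometry and the output is an identity match, it's facial recognition.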

Bone Structure Is the Whole Point

Here's the inconvenient detail: facial bones are more stable identifiers than soft tissue. Skin can age, fat can shift, expressions change, glasses get added and removed. The skull underneath barely moves across decades. Face-recognition models are explicitly trained to weight bone-structure-driven landmarks heavily because they're the most reliable signal of identity.

When Apple's Face ID was designed, the explicit selling point was that it captured a 3D depth map of the face's underlying structure — bone-driven topography — making it harder to spoof than 2D photo-based systems. Apple's marketing leaned hard on this. It worked because bone structure is identity. That's why the technology works at all.

Saying "we measure bone structure but it's not facial recognition" inverts the actual engineering. Bone structure is the most fundamental input to facial recognition. Skin tone and expression are secondary signals; bone geometry is the primary one.

Adding Height Doesn't Help

One of Meta's framings is that they're combining bone structure with other physical attributes — height, gait, posture — and that this combination is a different category of analysis ("soft biometrics") rather than facial recognition.

"Soft biometrics" is a real research term. It refers to physical attributes that are individually weak identifiers but useful in combination. Height alone won't identify you. Eye color alone won't identify you. Bone structure alone will — that's the asymmetry.

Combining a strong biometric (bone-structure-based facial geometry) with several weak ones (height, posture) doesn't reclassify the strong one. It's still facial recognition with extra metadata. NIST's own taxonomy treats this combination as "facial recognition + soft biometrics" — not as a separate category.
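The asymmetry is easy to make concrete with a toy score-fusion sketch. The weights and similarity scores below are hypothetical, chosen only to mirror the strong/weak split described above:

```python
# Illustrative fusion of a strong biometric (facial geometry) with weak
# soft biometrics (height, posture). All weights and scores are
# hypothetical, not drawn from any real system.

def fused_score(face_sim, height_sim, posture_sim,
                w_face=0.8, w_height=0.1, w_posture=0.1):
    """Weighted score fusion; the face channel dominates by design."""
    return w_face * face_sim + w_height * height_sim + w_posture * posture_sim

# Same person: face matches strongly, soft traits roughly agree.
same = fused_score(face_sim=0.95, height_sim=0.9, posture_sim=0.8)   # 0.93

# Different person who happens to share height and posture: the weak
# traits match, the face does not, and fusion says "no match".
different = fused_score(face_sim=0.10, height_sim=0.9, posture_sim=0.8)  # 0.25
```

The soft channels shift the score at the margins; the face channel decides the outcome. Bolting weak signals onto a strong one doesn't dilute the strong one into a new category.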

Why This Redefinition Exists

If a feature is classified as "facial recognition" under BIPA, the company has to obtain explicit written consent before processing each user's biometric data, has to publish a retention schedule, and is exposed to a private right of action (statutory damages of $1,000 per violation, $5,000 if intentional). For a product with hundreds of millions of users, the math is brutal. BIPA settlements have produced some of the largest privacy payouts in US history: Snap $35M, TikTok $92M, Meta $650M.
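To make "brutal" concrete, here's the back-of-envelope exposure using the statutory figures above. The 100-million-user count is a hypothetical placeholder, not a figure from any filing:

```python
# Back-of-envelope BIPA exposure. Statutory damages are per violation;
# the user count is a hypothetical round number for illustration.
per_negligent = 1_000        # statutory damages, negligent violation
per_intentional = 5_000      # statutory damages, intentional violation
users = 100_000_000          # hypothetical affected-user count

negligent_exposure = users * per_negligent      # $100 billion
intentional_exposure = users * per_intentional  # $500 billion
```

Even at one violation per user, the theoretical exposure dwarfs every settlement listed above. That gap is the incentive behind the rebrand.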

If the same feature is classified as "appearance analysis" or "soft demographic categorization," several of those obligations might not attach — depending on how courts ultimately read the statute. Hence the rebrand.

This isn't unique to Meta. Across the industry, the trend in 2025–2026 has been to launch face-adjacent features under any name that doesn't include the words "facial recognition." Apple's "Vision Pro Personas." Google's "expressive avatars." Snap's "lens analysis." The technology is largely the same. The category labels are designed to evade the obligations the technology should trigger.

What Happens If This Sticks

Two things, both bad for anyone who cares about face privacy.

  1. Every face-search engine reclassifies overnight. If Meta succeeds in establishing "bone structure analysis" as a separate category from facial recognition, PimEyes, FaceCheck.ID, Lenso.ai, and Clearview AI all have a template to follow. The current legal framework — BIPA suits, GDPR fines, the EU AI Act's prohibitions — depends on the technology being called what it is. Rename it and the framework gets quieter.
  2. Consent regimes become optional. The settlements that produced the BIPA payouts hinged on courts treating Tag Suggestions as a face-geometry product. If next-generation features are labeled "appearance analysis" instead, the consent obligations Meta has been operating under arguably weaken. Plaintiffs' lawyers will argue otherwise. Courts will eventually decide. In the meantime, the product ships.

The one bright spot: courts have, so far, been good about looking at what a system actually does rather than what its marketing department calls it. The Tag Suggestions settlement happened despite Meta arguing it wasn't facial recognition. The Illinois Supreme Court has consistently read BIPA to apply to function, not label.

Whether that stays true through this next round of litigation is the question regulators, courts, and the rest of the industry are watching.

Why This Matters For You

Whatever Meta decides to call its bone-structure analysis feature, the consequence for users is the same: more systems are extracting more measurements from more photos of you, on more services, with consent flows designed to be skipped past. Reducing the number of public photos that any of these systems — face recognition, "appearance analysis," whatever next year's category name is — can match against is one of the few defensive levers individuals actually have.

That's the layer face removal operates on. It doesn't change the law. It doesn't stop Meta from shipping the feature. It reduces the supply of public-web photos these systems can match your face against, regardless of what they're labeled.

Reduce the supply, regardless of the label.

FacePrivacy files monthly removal requests with PimEyes, Precheck.ai, FaceCheck.ID, Lenso.ai, Clearview AI, and other face-search engines on your behalf. We don't run a face-recognition product. We don't host your photos. We file the requests. $9.99/mo.

Start your removals →

Use code RENAMED at checkout for 15% off your first month.