The AI Brain Behind Ingredient Transparency

Overview:

This is a continuation of my ingredient interpreter project, sharing recent updates, refinements, and UX-focused design shifts. Read the original post.

Since publishing that post, I’ve focused on:

  • Building out Figma components to support modular delivery of ingredient explanations

  • Structuring the output for future integration with LLMs and vertical GenAI assistants

  • Refining UX copy to balance scientific accuracy with user clarity

Keon, our engineer, has been prototyping a system that blends large language models with structured schemas and verified ingredient data. The backend interprets dense INCI names and returns trustworthy insights. No fear-mongering, no fluff.


Defining the Output Format

To make the system usable for both consumers and professionals, we defined a structured schema for ingredient insights, focusing on a format that balances technical depth with readability.
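
To make that concrete, here’s a rough sketch of the kind of record we have in mind, written as a Go struct since the backend prototype is Go. The field names are illustrative placeholders, not the final schema:

```go
// IngredientInsight sketches the structured output we want the backend to
// return: one consumer-facing summary plus deeper fields for professionals.
type IngredientInsight struct {
	INCIName     string   `json:"inci_name"`     // e.g., "Sodium Hyaluronate"
	CommonName   string   `json:"common_name"`   // plain-language name
	Function     string   `json:"function"`      // role in the formula (humectant, emollient, ...)
	PlainSummary string   `json:"plain_summary"` // consumer-facing explanation
	Evidence     string   `json:"evidence"`      // deeper, professional-facing context
	Sources      []string `json:"sources"`       // citations backing the claims
}
```

Keeping the consumer-facing summary and the professional-facing evidence in separate fields is what lets one response serve both audiences.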

MVP Architecture: Keon’s Local Prototype

Keon spun up a local dev stack using Ollama, which acts as the runtime for our language models and lets us prototype quickly and compare LLM output across prompt types.

On the frontend, we’re using a lightweight React app where users can input ingredient names. These queries hit a Gin-powered backend API, which connects to the Ollama LLM and returns a clean JSON response.
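
Here’s a minimal sketch of that round trip, assuming Ollama’s default local port (11434) and its /api/generate endpoint; the route name, model tag, and prompt are illustrative stand-ins, not Keon’s actual code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"net/http"

	"github.com/gin-gonic/gin"
)

// Request/response shapes for Ollama's /api/generate endpoint.
type ollamaReq struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type ollamaResp struct {
	Response string `json:"response"`
}

func main() {
	r := gin.Default()

	// The React app POSTs {"name": "..."} with a raw INCI name.
	r.POST("/api/ingredient", func(c *gin.Context) {
		var in struct {
			Name string `json:"name"`
		}
		if err := c.ShouldBindJSON(&in); err != nil || in.Name == "" {
			c.JSON(http.StatusBadRequest, gin.H{"error": "missing ingredient name"})
			return
		}

		// Forward the query to the local Ollama runtime.
		payload, _ := json.Marshal(ollamaReq{
			Model:  "llama3", // placeholder; we swap models while benchmarking
			Prompt: "Explain the cosmetic ingredient " + in.Name + " in plain language, as JSON.",
			Stream: false,
		})
		resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(payload))
		if err != nil {
			c.JSON(http.StatusBadGateway, gin.H{"error": "LLM runtime unavailable"})
			return
		}
		defer resp.Body.Close()

		var out ollamaResp
		if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
			c.JSON(http.StatusBadGateway, gin.H{"error": "bad LLM response"})
			return
		}
		c.JSON(http.StatusOK, gin.H{"ingredient": in.Name, "insight": out.Response})
	})

	r.Run(":8080")
}
```

With the Ollama server running and a model pulled locally (e.g., `ollama pull llama3`), the React app only ever talks to the Gin route, so we can swap models without touching the frontend.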

React Frontend → Gin REST API → Ollama LLM Engine

Example Output
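
The screenshot of the response isn’t reproduced here, but following the schema sketched above, a reply might look something like this (values are invented for illustration):

```json
{
  "inci_name": "Sodium Hyaluronate",
  "common_name": "Hyaluronic acid (salt form)",
  "function": "Humectant",
  "plain_summary": "Draws water into the upper layers of skin to boost hydration.",
  "evidence": "Well-studied hydrator; common in leave-on products at low concentrations.",
  "sources": ["…"]
}
```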


Sourcing Data

While Keon benchmarks model performance on classification and clarity, I’ve been focused on the data pipeline: defining how data enters the system and which sources we trust.

We’ve prioritized sources that are both scientifically rigorous and user-readable. The bigger vision: combine strong UX with scientific rigor, and lay the groundwork for enterprise-grade features like versioning, observability, trust indicators, and contextual filtering.
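
As a sketch of how that groundwork might look in the schema, a provenance-aware source record could carry its own trust and versioning metadata. These fields (and the idea of a per-source trust tier) are my working assumptions, not a finalized design:

```go
// SourceRecord is a hypothetical shape for tracking where an insight came from,
// anticipating the versioning and trust-indicator features mentioned above.
type SourceRecord struct {
	Name        string `json:"name"`         // e.g., a peer-reviewed database or regulatory monograph
	URL         string `json:"url"`
	TrustTier   int    `json:"trust_tier"`   // 1 = peer-reviewed, 2 = regulatory, 3 = editorial
	RetrievedAt string `json:"retrieved_at"` // ISO 8601 timestamp, enabling versioned snapshots
}
```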

Here’s a screenshot of the early frontend where it all comes together.


Phases:

Phase 1: Testing + Iteration (Current Phase)

  • Integrate with an LLM or retrieval-augmented model

  • Conduct usability testing on language clarity

  • Align outputs with accessibility and enterprise design systems

Phase 2: User Testing + Feedback

  • Begin user testing with real ingredient lists

  • Test upload and paste workflows

  • Build a feedback loop for trust signals to guide future improvements

Future Plans

  • Layer in user profiles (e.g., acne-prone, sensitive skin, rosacea)

  • Enable “smart” questions (e.g., “Is this pregnancy-safe?”)

  • Develop a Chrome extension for ingredient popovers while browsing

Next

Building a Science-First Ingredient Interpreter