Maurice Wingfield

I build interactive digital experiences to foster deeper connections between people, culture, and their environment.

About

Object Detection with OpenCV on Raspberry Pi, 2022

I'm a creative technologist with 15+ years of experience in web development. Self-taught, I lead with curiosity, a bias toward action, and a need to get my hands dirty, learning by doing.

The result is a broad functional knowledge base, a couple of pillars of domain expertise, and the proficiency with AI tools to go deep on a per-project basis.

As a former board member at 934 Gallery, I understand cultural institutions from the inside: the budgets, the missions, the balance between public experience and operational reality. I've built interactive exhibits, originated youth programs, and managed the kind of cross-functional work that small teams require.

Projects

Fidget Feed

Android App

The problem: The average person picks up their phone 96 times a day. Most screen-time solutions fight human behavior head-on, blocking apps and hoping willpower handles the rest. After quitting Instagram cold turkey, I found myself picking up my phone throughout the day and staring at the empty space where the icon used to be, with nothing to do. That was the seed of the Fidget Feed user experience: a tappable, swipeable UI playground with haptic feedback, an attempt to give my brain the dopamine it craved without the distraction.

Product: Using Sensor Tower for market analysis and social media communities for user research, I validated the concept and developed a distinct name and design language. Fidget Feed is a two-word domain that describes the product in the simplest terms.

Design: Instead of stopping users dead in their tracks, I designed an alternate feed of low-cognitive-load fidget widgets. Figma explorations took the feed and onboarding through multiple iterations. The breakthrough: the best way to communicate the experience is to put live interactive widgets front and center in onboarding; better shown than described.

Tech: Built natively in Android Studio with coding agents as a core part of the workflow. Starting with GitHub Copilot Agent taught me the fundamentals of context engineering. Moving to Claude Code with the Opus 4.5 release was a step change in agent capability.

Where it is now: Entering beta, the final step before submitting to the Google Play Store.

Android Kotlin Figma Market Analysis Product Design Context Engineering
Join the Beta

Clicky Wheel: a satisfying rotary fidget with haptic clicks.
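The shipped widget is built in Kotlin, but the detent logic behind a clicky wheel is easy to sketch in any language. The idea: quantize a continuous rotation angle into detents, and fire one haptic click per boundary crossed. Everything below (names, detent spacing) is illustrative, not taken from the app:

```javascript
// Illustrative sketch of rotary-detent logic, not the app's actual code.
const DETENT_DEGREES = 15; // assumed click spacing, not the shipped value

function createClickyWheel(triggerHaptic) {
  let lastDetent = 0;
  return function onRotate(angleDegrees) {
    const detent = Math.floor(angleDegrees / DETENT_DEGREES);
    if (detent !== lastDetent) {
      // One click per detent crossed, so a fast spin still feels discrete.
      const clicks = Math.abs(detent - lastDetent);
      for (let i = 0; i < clicks; i++) triggerHaptic();
      lastDetent = detent;
    }
  };
}
```

Tracking the last detent rather than the last angle is what makes the clicks feel mechanical: small jitters inside a detent produce no feedback, while sweeping across several detents fires a rapid burst.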

Apophis Countdown

Web App

A personal project intended to turn public attention heavenward. 99942 Apophis is an asteroid roughly the size of the Eiffel Tower, on course to pass between the Earth and the Moon on April 13, 2029. Discovered in 2004, it went from possible collision threat to confirmed safe, then passed back into obscurity. I suspect that for a day or two it will be one of the most discussed stories on the planet. This project is a long play for that moment of attention.

What I built: An educational countdown site with a 3D interactive orbital viewer, a public REST API for developers, and a content strategy built around structured data and SEO.
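The countdown at the heart of the site is simple date arithmetic. A minimal sketch: the April 13, 2029 date comes from the project itself, but the function name and the round-up-to-whole-days choice are my own illustration, not the site's actual code:

```javascript
// Days remaining until Apophis's close approach (illustrative sketch).
const CLOSE_APPROACH = Date.UTC(2029, 3, 13); // JS months are 0-indexed: 3 = April

function daysUntilApproach(nowMs = Date.now()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  // Round up so a partial day still counts, and clamp at zero after the flyby.
  return Math.max(0, Math.ceil((CLOSE_APPROACH - nowMs) / msPerDay));
}
```

Pinning the target to UTC keeps the countdown consistent for visitors in every timezone.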

Three.js REST API SEO Structured Data
Visit Apophis Countdown

memory.audio

Web App

As I dove into the AI state of the art in early 2025, I built memory.audio to learn how to leverage LLMs to deliver real value to users. The app takes a single plain-English prompt and generates a 4–5 minute audio lesson on practically any subject.

Built in public on X, the app drew about a hundred users who provided feedback. A second iteration added rudimentary cognitive assessments: memory baselines users could track over time. A third added two-party, conversation-style podcasts to complement the rote-repetition lessons. At that point I realized I was recreating Google NotebookLM and decided to retire the project. The cognitive assessment angle still has legs; I expect to revisit it.

Tech: What started as scripts on my laptop became a fully deployed web app powered by OpenAI's completion API and Google Cloud Text-to-Speech. The project has since been taken offline.
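One practical piece of that API chaining: TTS services cap input per request (Google Cloud Text-to-Speech limits input to roughly 5,000 bytes), so an LLM-generated script has to be split before synthesis, ideally on sentence boundaries so the audio stays natural. A hedged sketch of that step, not the app's actual code, with an illustrative byte limit:

```javascript
// Split a lesson script into TTS-sized chunks on sentence boundaries.
// The 4500-byte default leaves headroom under an assumed ~5000-byte cap.
function chunkForTts(text, maxBytes = 4500) {
  // Grab sentence-ish runs: text up to punctuation, or a trailing fragment.
  const sentences = text.match(/[^.!?]+[.!?]+\s*|[^.!?]+$/g) || [];
  const chunks = [];
  let current = "";
  for (const sentence of sentences) {
    if (current && Buffer.byteLength(current + sentence, "utf8") > maxBytes) {
      chunks.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```

Each chunk then becomes one synthesis request, and the resulting audio segments are concatenated in order.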

OpenAI API GCP Text-to-Speech Prompt Engineering API Chaining
Currently offline
memory.audio cartridge

Technologies

Agentic Engineering

I build with coding agents as a core part of my workflow; not as a crutch, but as a multiplier that lets me move faster across unfamiliar stacks.

  • Claude Code — Max subscriber (5x). My primary coding agent, used deeply in Android Studio for Fidget Feed development: architectural decisions, debugging, and rapid prototyping across Kotlin, JS, and Python.
  • GitHub Copilot Agent — used inside VS Code with a range of models for day-to-day code generation, refactoring, and exploration across web and API projects.

Languages

  • JavaScript
  • Python
  • Kotlin
  • HTML / CSS

Cloud & Infrastructure

  • AWS (Lambda, S3)
  • DigitalOcean
  • Azure
  • Linux / Nginx

Tools & Frameworks

  • Node.js
  • Android SDK
  • Three.js
  • OpenAI / GCP APIs

Hardware & Physical

  • Computer Vision
  • TouchDesigner
  • Projection Mapping
  • LED Panel Systems