
A Brief Intro

Vertucon is a data-centric platform powered by programmatic labeling. It focuses on speeding up AI development by using weak supervision to train models, then analyze and iterate on the data.

The main end users are data scientists, subject matter experts, and machine learning practitioners.
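For context on what "programmatic labeling" means in practice, here is a minimal sketch of the weak supervision idea. It is illustrative only, not Vertucon's actual API: small heuristic "labeling functions" vote on each record, and the votes are aggregated into weak labels that stand in for hand annotation.

```python
# Illustrative sketch of programmatic labeling / weak supervision (not the platform's API):
# heuristic labeling functions vote on each record, and votes are aggregated into weak labels.
from collections import Counter

ABSTAIN, SPAM, NOT_SPAM = -1, 1, 0

def lf_contains_offer(text: str) -> int:
    # Keyword heuristic: promotional wording suggests spam.
    return SPAM if "limited offer" in text.lower() else ABSTAIN

def lf_has_greeting(text: str) -> int:
    # Personal greetings tend to indicate legitimate mail.
    return NOT_SPAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def majority_vote(text: str, lfs) -> int:
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

emails = ["Limited offer!!! Click now", "Hi team, notes from today's sync"]
weak_labels = [majority_vote(e, [lf_contains_offer, lf_has_greeting]) for e in emails]
print(weak_labels)  # [1, 0] -> used as training labels instead of hand annotation
```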

Project Overview

Timeline: 3 sprints
Team: Aparna (PM), Braden (Research), Brad (Research), Ryan (Research), Naveen (FE), Vaibhav (FE), Angela (Product Designer)
My role: Product Designer

User Problem

With the release of ChatGPT, foundation models and the LLMs associated with them became a new entry point to the user's goal: potentially better, faster, and more efficient AI development.

The problem was that the field was so new there was no guidance or precedent for how to implement these models, let alone how to present an easy way to harness this new potential.

Solution

Bootstrap foundation models into the platform at three points in the user's workflow to kickstart, enhance, and refine the iterative loop of training models.

How we got there:

Catch up, ramp up + GO!
• Due to the extremely tight timeline and pressure from the GTM team, I had to ramp up quickly in order to execute.

• The project was initiated by the Research team, and the platform's pivot toward integrating foundation models and LLMs had already been announced publicly as a key piece of accelerated AI development.

• Being completely unfamiliar with what the Research team was working on, the other designer and I had to ramp up quickly to understand how to design the UX for a new feature in an already complex platform.

• We collaborated with a member of the Research team and set up daily updates and Q&A sessions in order to understand how we'd be implementing foundation models within Vertucon.

• With this early collaboration, we were able to determine the UX entry points. From there, we iterated quickly in low-fidelity to maximize time and effort across cross-functional teams.

Collaborate, collaborate, collaborate
• Collaboration and open, honest communication throughout the whole project are key to any cross-functional success.

• We had to move fast and move fast we did.

• Not only did we move fast, we iterated quickly to gather valuable insight and feedback from various stakeholders and core members of the project.

• The deadline for the new feature reveal was critical and non-negotiable: we had press commitments, patent obligations to meet, and, in the start-up world, getting an MVP to market ahead of the competition is essential. But we also needed our designs not just to look and feel good but to make sense. To that end, our early iterations resulted in complete tear-downs and start-from-square-one rebuilds.

Design > Validate > Iterate
• Design fast, fail fast, and learn fast. After that, rebuild and restart the process until validation hits milestones.

• We built a concept prototype and ran think-aloud walkthroughs with our internal customers several times throughout the project; their feedback helped us iterate until we achieved our task-completion goals.

• What we achieved at this stage was 100% task completion on completely new UX designs for Warm Start, Prompt Builder, and Fine-tuning, all of which had to be integrated into the existing programmatic labeling development and iteration loop without confusing or slowing down our existing external customers.

Ship It
• The world awaits as we reveal a leap forward in AI development.

• We shipped the new LLM-powered features as part of the updated AI development workflow on time, to excitement and awe from our external customers.

Warm Start

New to Vertucon? Uploaded data and don't know what to do or how to start your AI development process? No problem! When you enter the application for the first time, you're greeted with a prompt that lets you use LLMs to kickstart everything.
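One way to picture what happens behind that prompt is the hedged sketch below, which is not the shipped implementation: a zero-shot prompt asks an LLM to propose a first-pass label for each uploaded record, so you start from suggestions instead of a blank project. `llm_complete` is a hypothetical stand-in for whichever foundation-model API is plugged in.

```python
# Hedged sketch of a warm-start flow (illustrative; `llm_complete` is a hypothetical
# placeholder for the hosted foundation-model call, not a real Vertucon function).
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for the foundation-model API call")

def warm_start_label(document: str, label_names: list[str]) -> str:
    # Zero-shot prompt: ask the LLM to pick one label for the uploaded record.
    prompt = (
        "Classify the document into exactly one of these labels: "
        f"{', '.join(label_names)}.\n\nDocument:\n{document}\n\nLabel:"
    )
    return llm_complete(prompt).strip()

# Example: warm_start_label("Refund has not arrived after 10 days",
#                           ["billing", "shipping", "other"])
```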

Prompting

Prompting, now with LLMs integrated directly into the workflow.
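Conceptually, a prompt built here can behave like one more labeling function: the LLM's answer is mapped to a label (or an abstain) and feeds the same iteration loop as the heuristic rules. A rough sketch under that assumption, with `llm_complete` again a hypothetical stand-in for the model call:

```python
# Rough sketch of a prompt-as-labeling-function (illustrative, not the product API).
ABSTAIN = -1

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for the foundation-model API call")

def make_prompt_lf(template: str, label_map: dict[str, int]):
    # Wrap a prompt template so it behaves like any other labeling function.
    def prompt_lf(text: str) -> int:
        answer = llm_complete(template.format(text=text)).strip().lower()
        return label_map.get(answer, ABSTAIN)  # unmapped answers abstain
    return prompt_lf

lf_sentiment = make_prompt_lf(
    "Is this review positive or negative? Answer with one word.\n\n{text}",
    {"positive": 1, "negative": 0},
)
```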

Train

Once you're done prompting and you're happy with some of the results, it's time to train on your data!
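As a rough illustration of what training on programmatically labeled data can look like (scikit-learn is used only for the sketch; it is not the platform's actual training stack), a simple classifier is fit on the weak labels:

```python
# Illustrative training step on weakly labeled text (scikit-learn used only for the sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Limited offer!!! Click now",
    "Hi team, notes from today's sync",
    "Act now, limited offer inside",
    "Hello all, agenda attached",
]
weak_labels = [1, 0, 1, 0]  # labels produced programmatically, not hand-annotated

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, weak_labels)
print(model.predict(["limited offer just for you"]))  # expected: [1]
```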

Fine-tuning

You can fine-tune at various steps: the goal is to iterate on your labels so that each trained model improves on the last, and LLMs can assist with that via fine-tuning.
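One way to picture the iteration this enables is the sketch below, under two assumptions: `model` is the classifier from the training sketch above, and the relabeling step is left to the user or an LLM prompt. The idea is to surface the model's lowest-confidence predictions, refine those labels, and retrain.

```python
# Hedged sketch of the label-refinement loop (illustrative only).
import numpy as np

def lowest_confidence(model, texts, k=3):
    # Rank rows by how unsure the current model is, so they can be relabeled first.
    probs = model.predict_proba(texts)      # class probabilities per row
    confidence = probs.max(axis=1)          # confidence of the predicted class
    order = np.argsort(confidence)[:k]      # least-confident rows first
    return [(texts[i], float(confidence[i])) for i in order]

# for text, conf in lowest_confidence(model, unlabeled_pool):
#     ...refine the label (manually or with an LLM prompt), then retrain.
```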

Impact & Reflections

Successful launch
• Because everyone on board was given a fixed goal and a clear vision for what this new, exciting feature would entail, the release of the features as a public beta was met with positive press and interest from existing customers as well as ICP and non-ICP prospects.

Innovation catalyst
• The seamless integration of LLMs into the platform has transformed the programmatic labeling workflow, introducing cutting-edge technology to our customers. The teams using our platform now operate at a higher level of productivity, positioning us as industry leaders harnessing forward-thinking language capabilities.

Tech-empowered team
• By empowering our customers through advanced language tools, the successful integration of LLMs has heightened individual and collective efficiency. That boost is reflected in our customers' enthusiasm and positions us for sustained success in a dynamic business landscape.

The case for validation
• Even though the launch of the new features was a complete success and continues to drive interest and innovation from external prospects, the process had a few glaring holes: more validation could have led to deeper thinking and to adjusting how LLMs were integrated into the platform.

Navigating uncharted territories
• The integration of LLMs into our platform was a venture into uncharted territories, and looking back, it's clear that the risk was well worth the reward. This project pushed the boundaries of what we thought possible, allowing us to navigate linguistic landscapes with precision. The reflections on this journey highlight not only the technological strides made but also the resilience and adaptability of our team.

Catalyst for evolution
• Integrating LLMs into our platform has acted as a catalyst for our evolution. The project was not just about incorporating advanced technology but about fundamentally changing the way we approach language-based tasks. As we reflect on this integration, it's evident that the project has not only refined our platform but has also sparked a continuous cycle of innovation and improvement, propelling us forward into a new era of linguistic excellence.

Dark Mode

And here are some Dark Mode mocks that I created in my free time.

A Case Study of LLMs


It's complex machine learning stuff
