Product Outlook 2025

Workshop

Multimodal AI Apps: Designing for Apps That Can See, Hear, and Speak

This workshop focuses on designing with multimodal AI. Participants explore how to build experiences that integrate voice, vision, and language, gaining insights into emerging UX patterns and prototyping techniques for next-gen AI interfaces.

Last year I ran a workshop called Omni Apps: Designing for Apps That Can See, Hear and Speak.

Here's a write-up of how it went -> https://joshuacrowley.com/study/omnivore/omni-workshop.

This year I'd like to return with the same workshop and format.

I'd like participants to bring a laptop.

I'll share interactive AI prototypes that participants can use during the session and gather their reflections on the spot. This year I can also contrast the prototypes' performance against the year prior.

For participants, my goal is to get them thinking beyond the chat/text-based experiences that dominate today.

I want them to see the opportunities that have arrived in the last year, and also get a sense of the challenges and shortcomings that remain.

Here's a Figma board we used in the session, which gives a good overview of the structure and experience.

  • Joshua Crowley
    General Assembly