In this workshop, we’ll show you how to turn a regular video or a short text prompt into 3D character animation — no motion capture suits, no special equipment, and no cleanup needed.
You’ll see how fast and easy it is to bring motion into your animation or game, and how to use our tools with Unreal Engine or your favorite 3D software. Everything runs right in your browser, and you can try it yourself during the session!
We launched our new single-view 3D motion capture product earlier this year, and we’re now adding even more features for animators and game developers.
During this workshop, you will learn how to:
- Capture motion from any video in seconds
- Animate avatars using just text prompts
- Retarget motion to your custom 3D characters
- Export results to Unreal Engine, Blender, Maya and more (see the Blender sketch after this list)
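If you want to sanity-check an export before the session, here is a minimal sketch that runs inside Blender's Python console. The file path is a placeholder (not part of the Meshcapade workflow itself), and the import call is Blender's standard bundled FBX importer:

```python
import bpy

# Import the exported animation file (hypothetical path — replace with your
# own export from the editor). Uses Blender's built-in FBX importer.
bpy.ops.import_scene.fbx(filepath="/path/to/capture.fbx")

# List imported armatures and their actions to confirm the motion came through.
for obj in bpy.context.scene.objects:
    if obj.type == 'ARMATURE':
        action = obj.animation_data.action if obj.animation_data else None
        print(obj.name, "->", action.name if action else "no action")
```

The same check works for glTF exports via Blender's bundled `bpy.ops.import_scene.gltf` importer.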
Our workshop is perfect for 3D artists, animators, indie devs, and creators with a passion for expressive character motion.
🎮 Follow along live — just bring your laptop if you want to try it out!
Meshcapade Plugins & Docs: meshcapade.me/plugins
Meshcapade Unreal Engine Tutorial: Watch on YouTube (https://youtu.be/ie9HWqy90JI)
MoCapade Demo Reel (Motion from Video): Watch on YouTube (https://youtu.be/XIvINWKdOEY)
Hair & Face Capture from Video → Watch preview (https://youtu.be/BOzfY7AvVEc)
Motion Generation → Watch preview (https://youtu.be/62SDxV6Mkb8)
Nathan Bajandas is a 3D Graphics Engineer at Meshcapade, where he focuses on integrating Meshcapade’s cutting-edge motion and avatar technologies into platforms like Unreal Engine and Blender. Since joining in 2022, he has also contributed to avatar creation and synthetic data workflows, helping bridge the gap between real-world input and digital animation.
Before Meshcapade, Nathan developed VR, desktop, and mobile training applications for pilots, mechanics, and scuba divers at Vertex Solutions. His earlier roles include work in visualization graphics, game development, and digital art at institutions such as the University of Illinois and Paradigm Entertainment.
Nathan holds a Master’s degree in Visualization and a Bachelor’s in Computer Science from Texas A&M University — a combination that reflects his passion for both technical precision and creative expression.
Naureen Mahmood is the CEO and co-founder of Meshcapade, an award-winning tech start-up based in Europe's largest AI ecosystem: Cyber Valley. Mahmood received her B.Sc. from Lahore University of Management Sciences, Pakistan, in 2006. She received a Fulbright Scholarship for her graduate studies and completed her M.Sc. at Texas A&M's Visualization Department in 2011. From 2012 to 2018 she worked at the Max Planck Institute for Intelligent Systems in Tübingen, where she was a key author of several academic publications in computer graphics, machine learning and computer vision. She is an inventor on a number of patents related to 3D human body models and 3D human motion. Naureen has been a speaker at the world's foremost conferences on computer vision and graphics, including CVPR, ICCV and SIGGRAPH. In 2024, her work on 3D human motion received the esteemed Test-of-Time award from ACM SIGGRAPH.
At Meshcapade she is applying her work on human understanding to train AI avatars to see, understand and interact just like real people. Meshcapade has built the foundation for all applications to transport the physical reality of people — talking, moving, interacting with the world — into the digital space. With this, Meshcapade is now developing the world's first and only 3D human behavior engine to power human-like interactions and behaviors for AI characters, virtual beings and humanoid robots.
Talha Zaman is the Co-Founder and Chief Technology Officer of Meshcapade, a company specializing in 3D human modeling and motion capture technologies. With a strong background in software engineering, Talha has previously contributed to organizations such as Google and Kyruus, and has worked on transitioning research into production-quality software at the Max Planck Institute for Intelligent Systems.
At Meshcapade, Talha leads the development of tools that enable creators to generate lifelike 3D human avatars and animations from various data sources, including video and text. His work supports applications across gaming, animation, and virtual production, aiming to make high-quality digital human creation accessible to a broad range of users.