I’ve always been driven by the challenge of using the computer as a medium for human interaction and expression. My career began with the tangible, real-time signals of touch and sound, learning to control robotic and haptic systems for audio interaction. For the last decade, I’ve applied that same drive to machine learning, exploring how computers can perceive the world through vision, and co-create new ideas with generative models. Today, my work is focused on the engineering craft of turning these powerful concepts into robust, production-ready software.

I’ve lived in four countries in the last ten years, but I currently reside in Utrecht, the Netherlands.

Latest work:

  • I recently finished my work leading development at DAISYS, a text-to-speech company focused on novel voice generation and highly controllable speech synthesis. From 2022, I led an engineering team that improved the existing TTS models in quality (intonation, correctness), controllability, and scalability; developed a web-facing product and API; and built a semi-automatic data acquisition pipeline that transforms in-the-wild audio into speech datasets ready for training.
  • For NavInfo Europe, I developed scenario extraction methods for highway driving using computer vision: road reconstruction, vehicle detection, and event identification. The work mixed modern ML with classic CV methods, using technologies such as PyTorch, OpenCV, and scikit-learn. A major contribution was estimating road topology from dashcam video by feeding a bird’s-eye-view projection into a BiLSTM-CRF model trained on highway map data; a minimal sketch of the projection step follows below.
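
To give a flavour of that bird’s-eye-view step, here is a minimal sketch using OpenCV’s homography functions. It is not NavInfo’s pipeline: the file name, source trapezoid, and output scale are illustrative assumptions, since in practice those points come from camera calibration or lane geometry.

    # Warp a dashcam frame onto the road plane with a homography to obtain a
    # bird's-eye view. Source points (a road trapezoid in image coordinates)
    # are assumed values; real ones come from calibration or lane geometry.
    import cv2
    import numpy as np

    frame = cv2.imread("dashcam_frame.png")  # hypothetical input frame
    h, w = frame.shape[:2]

    # Trapezoid on the road surface in the image (assumed values)...
    src = np.float32([[0.45 * w, 0.60 * h], [0.55 * w, 0.60 * h],
                      [0.90 * w, 0.95 * h], [0.10 * w, 0.95 * h]])
    # ...mapped to a rectangle in the top-down view.
    dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

    H = cv2.getPerspectiveTransform(src, dst)
    bev = cv2.warpPerspective(frame, H, (400, 600))  # top-down road image

    # Downstream (not shown here): per-row lane features extracted from `bev`
    # form the input sequence for a sequence model such as a BiLSTM-CRF.
    cv2.imwrite("bev.png", bev)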

Previous highlights:

  • Measurement system for fingerpad skin deformation under normal loading (for MSLab).
  • Implementation and optimization of a neural-network-driven, real-time audio filter based on fast convolution (for Enosis VR), work that is now part of an HP product; a sketch of the fast-convolution idea follows after this list.
  • Design and implementation of real-time rendering methods for ultrasound haptics (for MSLab).
  • Encoding and real-time synthesis of physically-based audio using neural networks in Sounderfeit (personal project while at Inria Chile).
  • Development of a new physics back-end using Inria’s Siconos for the Gazebo robotics simulator, with some major enhancements to Siconos mechanics along the way, including support for friction and detents within joint constraints (for Inria Chile).
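
For the curious, here is a minimal sketch of block-based fast convolution (overlap-add), the general technique behind that real-time filter. It is not the Enosis/HP implementation; the block size and the moving-average filter are illustrative assumptions.

    # Block-based FFT convolution (overlap-add): filter a signal one block at
    # a time, carrying the convolution tail over to the next block.
    import numpy as np

    def overlap_add_filter(x, h, block=512):
        """Filter float signal x with FIR h, block by block, via the FFT."""
        n_fft = 1  # FFT size: next power of two >= block + len(h) - 1
        while n_fft < block + len(h) - 1:
            n_fft *= 2
        H = np.fft.rfft(h, n_fft)       # filter spectrum, computed once
        tail = np.zeros(len(h) - 1)     # overlap carried between blocks
        out = np.empty_like(x)
        for start in range(0, len(x), block):
            xb = x[start:start + block]
            yb = np.fft.irfft(np.fft.rfft(xb, n_fft) * H, n_fft)
            yb = yb[:len(xb) + len(h) - 1]
            yb[:len(tail)] += tail      # add overlap from the previous block
            out[start:start + len(xb)] = yb[:len(xb)]
            tail = yb[len(xb):]         # save the new overlap for next time
        return out

    # Example: smooth white noise with a 64-tap moving-average FIR.
    x = np.random.randn(48000)
    h = np.ones(64) / 64
    y = overlap_add_filter(x, h)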

Some older projects in which I was a prime mover:

  • DIMPLE is a 3D haptic environment controlled over Open Sound Control, intended for use with creativity tools such as Pure Data, Max/MSP, and SuperCollider; a small example of this style of control follows after this list.
  • libmapper is a library for mapping Open Sound Control messages between devices on a local network in a decentralized manner. (Co-developed originally with Joseph Malloch; development continues with the IDMIL team today!)
  • Music systems: LoopDub is one of several systems I developed over the years for playing live techno music, all used on stage at some point.
  • Investigated methods for adapting physical bowed-string models for force-feedback haptics, including an evaluation of velocity estimators for friction-driven haptics (PhD work).
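
Since both DIMPLE and libmapper revolve around OSC messaging, here is a minimal sketch of what driving such a tool looks like, using the python-osc package. The port and message paths are illustrative assumptions, not DIMPLE’s actual API; consult the project documentation for the real message set.

    # Send OSC messages to an OSC-controlled tool (DIMPLE-like) over UDP.
    # The port (7774) and the message paths are assumed for illustration.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7774)  # assumed host and port

    # Hypothetical messages: create a sphere, then push it along +x.
    client.send_message("/world/sphere/create", ["ball"])
    client.send_message("/world/ball/force", [0.5, 0.0, 0.0])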

Open source:

I am the maintainer of the popular liblo library for Open Sound Control, a contributor to the RtMidi/RtAudio libraries with Gary Scavone, and I also maintain a package for the Debian Science team, namely Siconos.

My CV can be found here (PDF). Please get in touch if you’ve got some great ideas!