James Parker: So thanks so much for joining us, Santiago. Could you maybe begin by introducing yourself just briefly, however feels right to you?

Santiago Rentiera: Well, yeah, I'm from Mexico, Mexico City, and I'm now living in one of the most isolated capital cities in the world. And I also think it's a great place to be in Australia, despite the time differences and all the logistics implied in flying in and out. I got a scholarship at the University of Western Australia. This is an ARC scholarship on a project that is researching the cultures of automation. My research is mainly concerned with how automation is impacting the way that we manipulate and standardize representations of non-human sound.

Santiago Rentiera: So mainly I am using the sounds of the Australian magpie as a case study, because I think it's a super cool bird that can do so many things I thought birds couldn't do. They are very social. They are also complex vocalizers. They can mimic. So this biological inspiration is a guide in my intellectual journey. Besides that, I have also been researching the intellectual history of listening and how this can be mapped to the scientific methods in bioacoustics.

James Parker: So, I mean, that all sounds amazing. I'm tempted to just say, let's talk about magpies for a bit, but we'll get to that. We'll get to that. Can you tell us a little bit about how you ended up in that place, working on this? What's your background, intellectually, institutionally? I mean, are you an artist first and foremost, or, you know, a computer scientist?

James Parker: What's the training or the institutional formation that lands you here rather than somewhere else?

Santiago Rentiera: Yeah, I think it's a little bit funny because I'm this kind of multiple-hat type of person, or maybe a no-hat one. My starting point was music, because my bachelor's degree is in music, and also due to the influence of my family, because I come from a family of formally trained musicians. So I had this musical enrichment in my childhood, and that influenced my artistic perspective.

Santiago Rentiera: But curiously, while my parents wanted me to be this kind of concert musician, I ended up in the sciences, and now I'm doing something that is more in between. So I did music and production engineering and then moved to the computer science field, to a master's in computational science. There, two of my supervisors instilled in me this interest in listening to birds. One of them is from UCLA, and the other one now rests in peace. But I am very grateful for his drive and impulse in the study of birds.

Santiago Rentiera: I think that's one of the main reasons I ended up working on birds, because before my master's, I was thinking about doing something more like music interface design, or this creative computing field. And with the master's I discovered there was this field applying algorithmic techniques and sonic methods to study the communication of birds.

Santiago Rentiera: And yeah, that's where I came up with this artificial neural network model, which is a few-shot model capable of dealing with very small data sets. That addresses one of the common problems with some species: they don't have labels, or there are very few recordings that are actually labeled. So that's how I ended up doing this cross-disciplinary work in bioacoustics and computer science.

James Parker: Could I just rewind a little to that context with your supervisors? So was it that you arrived in a computer science department with a musical background, and you just so happened to find that in this department there were already people working, effectively, on machine listening, whether or not they were calling it that? That it was sort of sitting there ready and waiting, and you were kind of inducted into it? Is that what I've understood? Is that right?

Santiago Rentiera: Well, I think I simplified it a bit, because actually getting into that master's was a bit of a journey of talking to people, because I was coming from music. When I entered computer science, I had been taking or auditing classes on different computer science topics during my bachelor's. So I had to gain the trust of people, to show them, you know, I know what I'm doing. That's how I met the computer science researchers at Tecnologico de Monterrey, the university where I did my master's in Mexico.

Santiago Rentiera: And one of them invited me to apply to this master's program, funded by the Mexican equivalent of the ARC. And there, two of my supervisors had a long-running project connected to research at UCLA. The UCLA researcher is Charles Taylor. He started a lab for the study of birdsong, using sonic methods to develop techniques to map and understand sequences.

Santiago Rentiera: And then Edgar Vallejo, who was my supervisor in the master's, was the only one, I think, at that time in the faculty who had a project crossing biology, acoustics and machine learning.

Santiago Rentiera: So, yeah, it was pretty unique. I wasn't even expecting that, because you apply to the program but you don't choose your supervisor until you submit your proposal, something like six months after. And I didn't know Edgar before.

James Parker: So what sort of year was that?

Santiago Rentiera: I think I finished the master's around 2019.

James Parker: Okay, so this is all post deep learning, right? This is quite an important period in the history of machine listening, where suddenly everybody's transitioning across to machine learning techniques en masse. That's precisely the moment you're doing it, is that right?

Santiago Rentiera: Yeah, well, I wasn't very aware of the field itself, because everything was machine vision. Even people in the department treated classification of bird sounds as a problem of machine vision, because they pretty much turned the sound into spectrograms, and it was like, oh well, you just have to read it as an image. They actually didn't have listening skills, so it wasn't listening at all. For me it was just doing data analytics.

Santiago Rentiera: I think the moment when I began studying that historical progression, asking when it became listening or when it stopped being listening, wasn't until I wrote my PhD proposal, where I actually had to make the argument that there's a gap in knowledge, and that there's this transition from sonic methods and notations to spectrograms, which no longer require, you know, certain listening skills.