<aside> 💡

Read Open Sourcing the Origin Stories: The ml5.js Model and Data Provenance Project by Ellen Nickles and reflect on the following questions:

</aside>

What questions do you still have about the model and the associated data? Are there elements you would propose including in the biography?

I don’t have many questions about the model per se, but I do want to question something about the data. The reading states that they strive to only use data whose subjects have consented to it being used for training, but it has also come to my attention that some data is used without consent. How can we further improve this so that only data consented to by the ORIGINAL source is used? And how can we ensure that the person giving us the okay is the original creator, or the person whose face is in the image?

How does understanding the provenance of the model and its data inform your creative process?

Having an understanding of the data and the provenance of the model makes me want to explore everything more. However, models sometimes take a long time to load, and trying out everything would take a while, so I would hope to watch video demonstrations of the different models. Similarly for the data, it’s straining on the eyes to look at a bunch of words, so having images would be helpful too, along with categorization for the images so we can find things easily. It informs my creative process by giving me more places to look for inspiration and ideas rather than searching blindly.

<aside> 💡

In your blog post, include visual documentation such as a recorded screen capture / video / GIFs of your sketch.

</aside>

https://editor.p5js.org/Yigl00/sketches/5kFCsRcNL

For this particular sketch, I was reminded of my previous project from Nature of Code (taught by Professor Moon) using ml5, and I thought, why not make a super simplified version, but this time using the face and hands instead of a marionette. During the process I thought about it a lot more and decided that would still be very hard to do.

Looking through the examples, I first tried the drawing-with-nose sample, but I didn’t like it very much, so I then tried the trailing sample. The trailing sample was fun, so I started from it, but then I wanted to use lifespan in my code, which kind of made me think of it as combining the drawing with the trailing. Because I used lifespan without limiting the particle count, the sketch got quite choppy. Thus I added the following code:

```javascript
let detectionInterval = 200; // run detection at most every 200 ms (~5 detections per second)
let lastDetection = 0;       // timestamp of the most recent detection

// inside setup():
frameRate(5); // cap the whole sketch at 5 FPS
```
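To show how those pieces fit together, here is a minimal sketch of the throttling-plus-lifespan idea, assuming ml5.js’s faceMesh model and its single-shot detect() call. The gotFaces callback, the particle handling, and the keypoint index are simplified stand-ins for illustration, not the exact code from my sketch.

```javascript
// Minimal p5.js + ml5.js sketch: throttled face detection with trailing particles.
let faceMesh, video;
let faces = [];
let particles = [];
let detectionInterval = 200; // ms between detections (~5 per second)
let lastDetection = 0;

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  frameRate(5); // cap the sketch at 5 FPS
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);

  // Only ask the model for a new detection every detectionInterval ms,
  // instead of every frame.
  if (millis() - lastDetection > detectionInterval) {
    faceMesh.detect(video, gotFaces);
    lastDetection = millis();
  }

  // Spawn a trailing particle at one face keypoint (index chosen arbitrarily here)
  // and age out old particles so the particle count stays bounded.
  if (faces.length > 0) {
    const point = faces[0].keypoints[1];
    particles.push({ x: point.x, y: point.y, lifespan: 255 });
  }
  for (let i = particles.length - 1; i >= 0; i--) {
    const p = particles[i];
    noStroke();
    fill(255, p.lifespan);
    circle(p.x, p.y, 10);
    p.lifespan -= 15;
    if (p.lifespan <= 0) particles.splice(i, 1); // remove dead particles
  }
}

function gotFaces(results) {
  faces = results;
}
```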

https://drive.google.com/file/d/18T7olDv8g9zEW_v1VR4GeDxpccIYbiL9/view?usp=drive_web