6:30 Tonight!! “On A.I. and Cities: Platform Design, Algorithmic Perception, and Urban Geopolitics”

Spiegel Wilks Lecture Featuring Benjamin H. Bratton: “On A.I. and Cities: Platform Design, Algorithmic Perception, and Urban Geopolitics”

https://www.design.upenn.edu/fine-arts/undergraduate/events/spiegel-wilke-residencylecture-series-benjamin-h-bratton

Seems like it will touch on a lot of themes we’ve been discussing!

Meet Milo

Found another video…

This seems like a very cool application of robotics: https://www.youtube.com/watch?v=RsDdC88viDI

It presents another model of visual learning that seems to work really well for kids with autism. I think it’s interesting to consider whether the “Uncanny Valley” will be a problem here. My guess is it won’t be! The robot seems to be designed to demonstrate subtle caricatures of emotions, not to pass for an actual human. The gap between what reality looks like and how the robot “poorly” imitates it seems like it will actually be beneficial in this case.

Excerpt from Wikipedia:

Mori’s original hypothesis states that as the appearance of a robot is made more human, some observers’ emotional response to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong revulsion. However, as the robot’s appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[12]

This area of repulsive response aroused by a robot with appearance and motion between a “barely human” and “fully human” entity is called the uncanny valley. The name captures the idea that an almost human-looking robot will seem overly “strange” to some human beings, will produce a feeling of uncanniness, and will thus fail to evoke the empathic response required for productive human-robot interaction.[12]