
The pursuit of building intelligent, superhuman machines is nothing new. One Jewish folktale from the early 1900s describes the creation of a golem, an inanimate humanoid imbued with life by Rabbi Loew in Prague to protect the local Jews from anti-Semitic attacks. The story’s consequences are predictable: the golem runs amok and is ultimately undone by its creator.

This tale resonates with Mary Shelley’s Frankenstein, the novel that helped birth the science-fiction genre, and with the AI discourse in recent news cycles, which is growing ever more preoccupied with the dangers of rogue AI. Today, real-world AI is less autonomous and more an assistive technology. Since about 2009, a boom in technical advances has been fueled by the voluminous data generated by our intensive use of connected devices and the internet, as well as by the growing power of silicon chips.

In particular, this has led to the rise of a subtype of AI known as machine learning, and its descendant, deep learning: methods of teaching computer software to spot statistical correlations in enormous pools of data, be they words, images, code or numbers. One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all of this data so it can be analyzed by computers.

Without these labels, the algorithms that underpin self-driving cars or facial recognition remain blind: they cannot learn patterns. The algorithms built in this way now augment or stand in for .
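
To make "learning from labelled examples" concrete, here is a minimal sketch, not taken from the article, using the open-source scikit-learn library. The features, labels and category names are invented toy values; real systems train on millions of human-labelled examples.

```python
# Minimal illustration of supervised learning from labelled examples.
# The data below are invented toy values for illustration only.
from sklearn.linear_model import LogisticRegression

# Each example is a pair of numeric features; each label was assigned by a human.
examples = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.95]]
labels = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(examples, labels)  # the model looks for statistical correlations between features and labels

# Ask the fitted model to label a new, unseen example.
print(model.predict([[0.85, 0.9]]))  # likely ['dog'], a pattern inferred from the labelled data
```

Without the human-assigned labels in this sketch, the model would have nothing to correlate its inputs against, which is the sense in which unlabelled data leaves such algorithms "blind."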
