On the Future of Money podcast, Heather Schlegel interviews Autumn Rooney, founder of the Echo Park Time Bank. While I've heard of time swapping, time banks themselves are a new concept to me. In 42 minutes, they cover several interesting points on the topic, including:
@3:50 - How is the future changing? Ms. Rooney points out today's unsustainable methods and how they are bringing about tomorrow's changes.
@4:55 - An explanation of time banks and how time values are traded.
@15:20 - How do new users experience this service?
@23:20 - How time banks can mesh with the cash economy.
The Future of Money podcast hosted Johnny Dilley, a Venture Associate at Pantera Capital, to discuss where Bitcoin is today and where it might go in the future. Over 45 minutes, Johnny introduces several concepts behind the digital currency's technology.
There is a lot of speculation about the technological singularity, the point at which artificial intelligence progresses beyond human intelligence. Between AI and advanced engineering, humans could quickly become extraneous to modern society, especially in the eyes of those who own the robots and see surplus humans as needy beings who give little back to the planet.
Episode #255 of This Week in Law focuses on the various issues we may face when regulating robots in the future. The discussion revolves around James Barrat's book Our Final Invention: Artificial Intelligence and the End of the Human Era. Drawing on Barrat's writings, the panel discusses how a superintelligence might come to view its human 'captors': humans who were once the smartest creatures on the planet but now occupy a lower rung of intelligence, surpassed by their own creation.
Topics include building sympathy into machines to instill ethics, robotic personhood, and a range of issues we might encounter after the Singularity occurs. The robotics discussion runs about an hour and goes very deep.
Over the weekend, I was listening to This Week in Law's January 31 episode, Deep Blue vs. The Universe. They opened on the topic of the Singularity, with James Miller and Stan Liebowitz discussing how we might know the Singularity has been achieved. Each had a different view on how far in the future this event might occur and what capabilities an intelligence should possess to be considered "intelligent." Both viewpoints were informative, but I found Dr. Liebowitz's position especially interesting: he felt a contrast between an artificial intelligence and a human should be used to identify when the artificial process has reached or surpassed a comparable human capability. Dr. Liebowitz shared this thought:
@24:25 - I see things computers can do that people can't, but things people can do that computers can't is what we're talking about.