We're a data science podcast focused on the latest & greatest of the DS ecosystem, sprinkled with our musings & data science expertise. With topics ranging from ethical AI and transparency to robot pets, our hosts, Triveni & Will, are here to keep you up to date on the latest trends, news, and big convos in data. Click here to listen to the Banana Data Podcast.
If you're looking to keep up between episodes, be sure to subscribe to our weekly Banana Data Newsletter! Register here
Welcome to the Banana Data Podcast! For our inaugural episode, our hosts Triveni and Will challenge the idea that the “best model is the most efficient,” examine the current ethical gaps in data collection, and explore how methods like federated learning can help keep private user data, well, private.
On episode two of the podcast, Triveni and Will look at how digital assistants may perpetuate biased data and how multi-armed bandits can build a top-notch recommendation system (and win over Triveni’s heart). They also share their interview with Mark Buckler, PhD candidate at Cornell and author of the article “How to Make Bad Deep Learning Hardware,” on why understanding hardware may be the key to building your best models yet.
This week we’re diving into some deeper impacts of AI’s successes and failures, asking where responsibility lies when an algorithm fails and what benefits in accessibility and responsibility come with AI in healthcare. We’re also taking a deep dive into epsilon-greedy multi-armed bandits and how we can more accurately describe our successes (and our failures) in AI.
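If you’d like a concrete picture of the epsilon-greedy strategy before listening, here’s a minimal sketch in Python. Everything here (the arm payout rates, the function name, the parameters) is illustrative and not taken from the episode: with probability epsilon the agent explores a random arm, and otherwise it exploits the arm with the best reward estimate so far.

```python
import random

def epsilon_greedy(true_rewards, epsilon=0.1, steps=1000, seed=42):
    """Simulate an epsilon-greedy multi-armed bandit.

    true_rewards: per-arm probability of a payout of 1.0.
    With probability epsilon we explore (pick a random arm);
    otherwise we exploit the arm with the best estimated reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_rewards)
    counts = [0] * n_arms        # how many times each arm was pulled
    estimates = [0.0] * n_arms   # running mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)            # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = 1.0 if rng.random() < true_rewards[arm] else 0.0
        counts[arm] += 1
        # incremental mean: new_mean = old_mean + (x - old_mean) / n
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

Over enough pulls, the estimates converge toward the true payout rates and the exploit branch funnels most of the traffic to the best arm, which is exactly the trade-off that makes bandits attractive for recommendation systems.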
On episode 4 of Banana Data, we’re taking a look at how our data is changing. With models in the wild skewing our future data sets, the impending shift to Python 3, and navigating a public distrust of machine learning, Triveni and Will talk through how our current decisions in AI will heavily influence its future. They’ll also take a stab at explaining GANs in plain English.
As AI continues to embed itself in our lives, we’ll have to start evaluating how we interact with, and build relationships with, our artificial intelligence applications. On this episode of Banana Data, we’re taking a look at what AI means for the individual - from emergent Robopet friendships to AI art as a medium, to what emotional intelligence looks like in AI - and how we can produce better, more human AI systems.
When we release our AI into the world, its impact extends far beyond the business and tech we’re working on. On this episode, we’re diving into the consequences of AI on consumers, housing, and the environment through the lens of GDPR, the supposed “AI job apocalypse” and some controversial takes on models’ carbon emissions.
Accessibility, by definition, is about making tasks more achievable. In episode 7 of the podcast, Triveni and Will explore how AI is shaping our world to become more accessible, and how we as data scientists can help it get there, diving into Salesforce’s new “unstructured” querying tool, the various physical manifestations of AI, and even questioning some of their previous takes on ethics. They’ll also walk us through the contributions of BERT to the NLP space, and how and why it’s been so revolutionary.
This episode, Triveni and Will tackle the value, ethics, and methods of good labeled data, while also weighing the need for model interpretability and the possibility of an impending AI winter. Triveni will also take us through a step-by-step of the decisions made by a random forest algorithm.
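As a rough preview of that step-by-step, here’s a toy illustration (not the episode’s actual walkthrough): a random forest is just a collection of decision trees, each asking a chain of threshold questions, with the forest majority-voting on the answer. The two hand-written “trees” and their thresholds below are hypothetical, loosely inspired by the classic iris-flower example.

```python
from collections import Counter

def tree_a(petal_len, petal_wid):
    """One tiny decision tree: a chain of threshold questions."""
    if petal_len < 2.5:
        return "setosa"
    return "versicolor" if petal_wid < 1.7 else "virginica"

def tree_b(petal_len, petal_wid):
    """A second tree that asks different questions of the same data."""
    if petal_wid < 0.8:
        return "setosa"
    return "versicolor" if petal_len < 4.9 else "virginica"

def forest_predict(petal_len, petal_wid):
    """A random-forest-style prediction: let every tree vote,
    then return the majority class."""
    votes = Counter(t(petal_len, petal_wid) for t in (tree_a, tree_b))
    return votes.most_common(1)[0][0]

print(forest_predict(5.1, 1.9))  # both trees vote "virginica"
```

In a real random forest the trees are learned from bootstrapped samples of the data with randomized feature choices, but the decision process at prediction time is exactly this: walk each tree’s thresholds, then tally the votes.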
In our second-to-last episode of the season, Triveni and Will explore the data world’s shifting attitude toward standalone data visualizations (are they dying? Who are they for?), how to respond to global AI practices (what are global AI standards? How do different countries vary in their AI approaches?), and the feasibility of an AI audit. We’ll also see how Spark fits into the infrastructure of our data science systems.
For our season 1 finale, Triveni and Will give their two cents on the most important aspects of a data science practice. From intentional data to getting outside perspectives, they walk us through how to build not only a scalable AI practice, but one that is responsible, ethical, and interpretable.