Data Science Decoded – Details, episodes & analysis

Podcast details

Technical and general information from the podcast's RSS feed.

Data Science Decoded


Mike E

Science

Frequency: one episode every 13 days. Total episodes: 31

Spotify for Podcasters
We discuss seminal mathematical papers (sometimes really old 😎 ) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We will discuss the contribution of these papers, not just from pure a math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our youtube: https://youtu.be/wThcXx_vXjQ?si=vnMfs
Site
RSS
Apple

Recent rankings

Latest chart positions across Apple Podcasts and Spotify rankings.

Apple Podcasts

  ‱ 🇨🇦 Canada - mathematics: #18 (14/08/2025)
  ‱ 🇬🇧 Great Britain - mathematics: #6 (14/08/2025)
  ‱ 🇩🇪 Germany - mathematics: #17 (14/08/2025)
  ‱ 🇺🇸 USA - mathematics: #31 (14/08/2025)
  ‱ 🇨🇦 Canada - mathematics: #14 (13/08/2025)
  ‱ 🇬🇧 Great Britain - mathematics: #6 (13/08/2025)
  ‱ 🇩🇪 Germany - mathematics: #15 (13/08/2025)
  ‱ 🇺🇸 USA - mathematics: #28 (13/08/2025)
  ‱ 🇨🇦 Canada - mathematics: #10 (12/08/2025)
  ‱ 🇬🇧 Great Britain - mathematics: #6 (12/08/2025)

Spotify

    No recent rankings available



RSS feed quality and score

Technical evaluation of the podcast's RSS feed quality and structure.

RSS feed quality
To improve

Overall score: 63%


Publication history

Monthly episode publishing history over the past years.


Latest published episodes

Recent episodes with titles, durations, and descriptions.


Data Science #7 - "The use of multiple measurements in taxonomic problems." (1936), Fisher RA

Season 1 · Episode 7

Monday, 12 August 2024 ‱ Duration 47:30

This paper introduced linear discriminant analysis (LDA), a statistical technique that revolutionized classification in biology and beyond. Fisher demonstrated how to use multiple measurements to distinguish between different species of iris flowers, laying the foundation for modern multivariate statistics. His work showed that combining several characteristics could provide more accurate classification than relying on any single trait.

This paper not only solved a practical problem in botany but also opened up new avenues for statistical analysis across various fields. Fisher's method became a cornerstone of pattern recognition and machine learning, influencing diverse areas from medical diagnostics to AI. The iris dataset he used, now known as the "Fisher iris" or "Anderson iris" dataset, remains a popular example in data science education and research.
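The core computation behind Fisher's discriminant is compact enough to sketch. Below is a minimal, self-contained numpy illustration on synthetic two-class data (not the real iris measurements); the class means, scales, and variable names are illustrative choices of ours:

```python
import numpy as np

# Sketch of Fisher's linear discriminant for two classes,
# on synthetic 2-D data (illustrative, not the iris dataset).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))  # class 0
X1 = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(100, 2))  # class 1

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter: sum of the two class covariance matrices.
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
# Fisher's direction maximizes between-class over within-class
# scatter: w is proportional to Sw^{-1} (m1 - m0).
w = np.linalg.solve(Sw, m1 - m0)

# Classify by projecting onto w and thresholding at the midpoint.
threshold = w @ (m0 + m1) / 2
pred0 = X0 @ w > threshold   # mostly False for class 0
pred1 = X1 @ w > threshold   # mostly True for class 1
accuracy = (np.sum(~pred0) + np.sum(pred1)) / 200
print(f"training accuracy: {accuracy:.2f}")
```

This combines both measurements into a single discriminant score, which is exactly the point Fisher made: the combined projection separates the classes better than either variable alone.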


Data Science #6 -"On the problem of the most efficient tests of statistical hypotheses." (1933) N&P

Season 1 · Episode 6

Wednesday, 7 August 2024 ‱ Duration 56:32

This paper is considered one of the foundational works in modern statistical hypothesis testing.

Key insights and influences:

  1. Neyman-Pearson Lemma: The paper introduced the Neyman-Pearson Lemma, which provides a method for finding the most powerful test for a simple hypothesis against a simple alternative.
  2. Type I and Type II errors: It formalized the concepts of Type I (false positive) and Type II (false negative) errors in hypothesis testing.
  3. Power of a test: The paper introduced the concept of the power of a statistical test, which is the probability of correctly rejecting a false null hypothesis.
  4. Likelihood ratio tests: It laid the groundwork for likelihood ratio tests, which are widely used in modern statistics.
  5. Optimal testing: The paper provided a framework for finding optimal statistical tests, balancing the tradeoff between Type I and Type II errors.

These concepts have had a profound influence on modern statistical theory and practice, forming the basis of much of classical hypothesis testing used today in various fields of science and research.
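To make the Type I/Type II error tradeoff concrete, here is a small stdlib-only sketch computing the critical value, Type II error, and power for a one-sided z-test; the hypotheses, sample size, and significance level are illustrative choices of ours:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# One-sided z-test of H0: mu = 0 vs H1: mu = 1, known sigma = 1, n = 25.
n, sigma = 25, 1.0
alpha = 0.05                  # chosen Type I error rate
se = sigma / math.sqrt(n)     # standard error of the sample mean

# Critical value c: reject H0 when the sample mean exceeds c,
# chosen so that P(reject | H0 true) = alpha.
z_alpha = 1.6449              # 95th percentile of the standard normal
c = 0.0 + z_alpha * se

# Type II error: probability of NOT rejecting when mu = 1 is true.
beta = normal_cdf((c - 1.0) / se)
power = 1.0 - beta            # P(reject | H1 true), what Neyman-Pearson maximize

print(f"critical value: {c:.3f}")
print(f"Type II error beta: {beta:.4f}")
print(f"power: {power:.4f}")
```

Shrinking alpha pushes the critical value up and the power down, which is precisely the tradeoff the paper formalizes.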

Data Science #5 - "A Mathematical Theory of Communication" (1948), Shannon, C. E. Part - 3

Season 1 · Episode 5

Tuesday, 30 July 2024 ‱ Duration 01:07:22

Shannon, Claude Elwood. "A mathematical theory of communication." The Bell system technical journal 27.3 (1948): 379-423. Part 3/3. The paper fundamentally reshapes how we understand communication. The paper introduces a formal framework for analyzing communication systems, addressing the transmission of information with and without noise. Key concepts include the definition of information entropy, the logarithmic measure of information, and the capacity of communication channels. In the third part we go over the Fundamental theorem of the noisy and noiseless channel! Full breakdown of the paper with math and python code is at our website: https://datasciencedecodedpodcast.com/
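Shannon's entropy measure is easy to compute directly from a source's symbol probabilities. A minimal sketch (the function name is ours):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss; a biased source carries
# less; the uniform 4-symbol source carries 2 bits.
print(entropy_bits([0.5, 0.5]))    # 1.0
print(entropy_bits([0.9, 0.1]))    # ~0.469
print(entropy_bits([0.25] * 4))    # 2.0
```

The noiseless-channel theorem discussed in this part says these values bound the achievable compression rate: no lossless code can average fewer bits per symbol than the entropy.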

Data Science #4 - "A Mathematical Theory of Communication" (1948), Shannon, C. E. Part - 2

Season 1 · Episode 4

Sunday, 21 July 2024 ‱ Duration 41:11

Shannon, Claude Elwood. "A mathematical theory of communication." The Bell system technical journal 27.3 (1948): 379-423. Part 2/3. The paper fundamentally reshapes how we understand communication. The paper introduces a formal framework for analyzing communication systems, addressing the transmission of information with and without noise. Key concepts include the definition of information entropy, the logarithmic measure of information, and the capacity of communication channels. Shannon demonstrates that information can be efficiently encoded and decoded to maximize the transmission rate while minimizing errors introduced by noise. This work is pivotal today as it underpins digital communication technologies, from data compression to error correction in modern telecommunication systems. Full breakdown of the paper with math and python code is at our website: https://datasciencedecodedpodcast.com... This is the second part out of 3, as the paper is quite long!

Data Science #3 - "A Mathematical Theory of Communication" (1948), Shannon, C. E. Part - 1

Tuesday, 16 July 2024 ‱ Duration 41:04

Shannon, Claude Elwood. "A mathematical theory of communication." The Bell system technical journal 27.3 (1948): 379-423. Part 1/3. The paper fundamentally reshapes how we understand communication. The paper introduces a formal framework for analyzing communication systems, addressing the transmission of information with and without noise. Key concepts include the definition of information entropy, the logarithmic measure of information, and the capacity of communication channels. Shannon demonstrates that information can be efficiently encoded and decoded to maximize the transmission rate while minimizing errors introduced by noise. This work is pivotal today as it underpins digital communication technologies, from data compression to error correction in modern telecommunication systems. Full breakdown of the paper with math and python code is at our website: https://datasciencedecodedpodcast.com... This is the first part out of 3, as the paper is quite long!

Data Science #2 - "Application of the Logistic Function to Bio-Assays" (1944), Berkson Joseph

Season 1 · Episode 2

Sunday, 7 July 2024 ‱ Duration 01:01:29

  "Application of the Logistic Function to Bio-Assays" (1944), Berkson Joseph

It gained further prominence in the 20th century through applications in various fields, including biology and bio-assay. Joseph Berkson's 1944 paper, 'Application of the Logistic Function to Bio-Assay,' was pivotal in popularizing its use for estimating drug potency.


Berkson argued that the logistic function was a more statistically manageable and theoretically sound alternative to the probit function, which assumed that individual susceptibilities to a drug follow a normal distribution.


The logistic function's ability to be easily linearized via the logit transformation simplifies parameter estimation, making it an attractive choice for analyzing dose-response data.
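That linearization can be shown in a few lines: if the response probability follows p = logistic(a + b * dose), then logit(p) = a + b * dose is exactly linear in the dose. A small sketch with illustrative parameter values of our own choosing:

```python
import math

def logistic(x):
    """Dose-response probability under the logistic model."""
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    """Berkson's logit transform: the log-odds, inverse of the logistic."""
    return math.log(p / (1.0 - p))

a, b = -2.0, 0.8                      # illustrative intercept and slope
doses = [0.0, 1.0, 2.0, 3.0, 4.0]
probs = [logistic(a + b * d) for d in doses]
logits = [logit(p) for p in probs]    # recovers a + b * d exactly

for d, p, l in zip(doses, probs, logits):
    print(f"dose {d:.1f}  p {p:.3f}  logit {l:.2f}")
```

Because the logit of the observed response proportions is (ideally) a straight line in the dose, the parameters can be estimated by simple linear fitting, which is the practical advantage Berkson emphasized over the probit.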

Data Science #1 - Fisher RA. "On the mathematical foundations of theoretical statistics"(1922)

Season 1 · Episode 1

Sunday, 7 July 2024 ‱ Duration 01:16:59

We discuss Ronald A. Fisher's paper "On the Mathematical Foundations of Theoretical Statistics"(1922), which profoundly shaped the field of statistics by establishing key concepts such as maximum likelihood estimation, which is crucial for parameter estimation in statistical models.
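As a concrete instance of maximum likelihood estimation, here is a minimal stdlib sketch for the Bernoulli parameter, where the MLE has the closed form k/n; the data values are illustrative:

```python
import math

# Maximum likelihood for a Bernoulli parameter p: the log-likelihood
# of k successes in n trials is l(p) = k*log(p) + (n-k)*log(1-p),
# and it peaks at p = k/n (shown here by grid search instead of calculus).
k, n = 7, 10   # illustrative data: 7 successes in 10 trials

def log_likelihood(p):
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=log_likelihood)
print(f"MLE over the grid: {p_hat:.3f}  (closed form: {k / n})")
```

Fisher's insight was that this principle of choosing the parameter value that makes the observed data most probable applies far beyond this toy case, to essentially any parametric model.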

Data Science #9 - The Unreasonable Effectiveness of Mathematics in Natural Sciences, Eugene Wigner

Season 1 · Episode 9

Tuesday, 10 September 2024 ‱ Duration 01:24:32

In this special episode, Daniel Aronovich joins forces with the 632 nm podcast. In this timeless paper Wigner reflects on how mathematical concepts, often developed independently of any concern for the physical world, turn out to be remarkably effective in describing natural phenomena.


This effectiveness is "unreasonable" because there is no clear reason why abstract mathematical constructs should align so well with the laws governing the universe. Full paper is at our website:

https://datasciencedecodedpodcast.com/episode-9-the-unreasonable-effectiveness-of-mathematics-in-natural-sciences-eugene-wigner-1960

Data Science #8 - The Turing test by Turing Alan "Computing machinery and intelligence" Mind (1950)

Season 1 · Episode 8

Wednesday, 4 September 2024 ‱ Duration 54:57

This paper is a foundational text in the field of artificial intelligence (AI) and explores the question: "Can machines think?" Turing introduces what is now known as the "Turing Test" as a way to operationalize this question; he called it the imitation game. Are there imaginable digital computers that could perform well in the imitation game? The imitation game involves an interrogator trying to distinguish between a human and a machine based on their responses to various questions. Turing argues that if a machine could perform well enough in this game to be indistinguishable from a human, then it could be said to "think." He explores various objections to the idea that machines can think, including theological and mathematical objections and arguments from consciousness.


Turing addresses each objection, ultimately suggesting that machines can indeed be said to think if they can perform human-like tasks, especially those that involve reasoning, learning, and language.

Data Science #10 - The original principal component analysis (PCA) paper by Harold Hotelling (1933)

Season 1 · Episode 10

Thursday, 12 September 2024 ‱ Duration 55:41

Hotelling, Harold. "Analysis of a complex of statistical variables into principal components." Journal of educational psychology 24.6 (1933): 417.


This seminal work by Harold Hotelling on PCA remains highly relevant to modern data science because PCA is still widely used for dimensionality reduction, feature extraction, and data visualization. The foundational concepts of eigenvalue decomposition and maximizing variance in orthogonal directions form the backbone of PCA, which is now automated through numerical methods such as Singular Value Decomposition (SVD). Modern PCA handles much larger datasets with advanced variants (e.g., Kernel PCA, Sparse PCA), but the core ideas from the paper—identifying and interpreting key components to reduce dimensionality while preserving the most important information—are still crucial in handling high-dimensional data efficiently today.
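The SVD route to PCA mentioned above fits in a few lines of numpy. A minimal sketch on synthetic data (our own illustration, not Hotelling's original iterative computation):

```python
import numpy as np

# Minimal PCA via SVD: center the data, decompose, and read off the
# principal directions and the variance along each of them.
rng = np.random.default_rng(0)
# Synthetic 2-D data with one strongly dominant direction.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

Xc = X - X.mean(axis=0)                 # center each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                          # principal directions (rows)
explained_var = S**2 / (len(X) - 1)      # variance along each component

# Projecting onto the first component gives the best 1-D summary of X
# in the maximum-variance (equivalently least-squares) sense.
scores = Xc @ components[0]
ratio = explained_var[0] / explained_var.sum()
print(f"variance explained by PC1: {ratio:.2%}")
```

The singular vectors here are exactly the eigenvectors of the covariance matrix that Hotelling's formulation asks for; SVD is simply the numerically stable modern way to obtain them.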


Related Shows Based on Content Similarities

Discover shows related to Data Science Decoded, based on actual content similarities. Explore podcasts with similar topics, themes, and formats, backed by real data.
There is no related content for this show.
© My Podcast Data