Released: 06/2019 | Duration: 1h 42m | MP4 1280x720, 30 fps | AAC, 48000 Hz, 2ch | 888 MB | Level: Intermediate | Genre: eLearning | Language: English
Artificial intelligence (AI) can have deeply embedded bias. It's the job of data scientists and developers to ensure their algorithms are fair, transparent, and explainable. This responsibility is critically important when building models that may determine policy—or shape the course of people's lives. In this course, award-winning software engineer Kesha Williams explains how to debias AI with SageMaker. She shows how to use SageMaker to create a predictive-policing machine-learning model that integrates Rekognition and AWS DeepLens, producing a crime-fighting model that can "see" what's happening in a live scene. By following the development process, you can learn what goes into making a model that doesn't suffer from cultural prejudices. Kesha also discusses how to remove bias in training data, test a model for fairness, and build trust in AI by making models that are explainable.
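As a rough illustration of what "testing a model for fairness" can mean in practice (this sketch is not taken from the course), one simple check is demographic parity: comparing the model's positive-prediction rate across groups. The predictions and group labels below are made up for demonstration.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs and group membership for a quick check.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # e.g. {'A': 0.4, 'B': 0.6}
print(f"parity gap: {gap:.2f}")   # a large gap suggests the model needs debiasing

A large gap between groups is one signal that the training data or model may need the kind of debiasing the course covers.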