  • Online Resource
    [Place of publication not identified] : O'Reilly Media, Inc. | Boston, MA : Safari
    Language: English
    Pages: 1 online resource (1 video file, approximately 41 min.)
    Edition: 1st edition
    Keywords: Electronic videos ; local
    Abstract: When evaluating ML models, it can be difficult to tell the difference between what the models learned to generalize from training and what the models have simply memorized. That difference can be crucial in some ML tasks, such as when ML models are trained using sensitive data. Recently, new techniques have emerged for differentially private training of ML models, including deep neural networks (DNNs), that use modified stochastic gradient descent to provide strong privacy guarantees for training data. Those techniques are now available, practical, and easy to use. That said, they come with their own set of hyperparameters that need to be tuned, and they necessarily make learning less sensitive to outlier data in ways that are likely to slightly reduce utility. Úlfar Erlingsson explores the basics of ML privacy, introduces differential privacy and why it's considered a gold standard, explains the concrete use of ML privacy and the principled techniques behind it, and dives into intended and unintended memorization and how it differs from generalization.
    Prerequisite knowledge:
    • Experience using TensorFlow to train ML models
    • A basic understanding of stochastic gradient descent
    What you'll learn:
    • What it means to provide privacy guarantees for ML models and how such guarantees can be achieved in practice using TensorFlow Privacy (a minimal sketch follows this record)
    Note: Online resource; title from title screen (viewed February 28, 2020); Mode of access: World Wide Web.
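The abstract above refers to differentially private training via modified stochastic gradient descent (DP-SGD) and the TensorFlow Privacy library. Below is a minimal sketch of how such training is typically set up with the tensorflow_privacy package; the toy model, the MNIST-shaped inputs, and every hyperparameter value are illustrative assumptions, not details from the talk.

```python
# A minimal DP-SGD training setup with TensorFlow Privacy.
# ASSUMPTIONS: the two-layer model, the (28, 28) input shape, and all
# hyperparameter values below are illustrative, not from the talk.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# DP-SGD modifies ordinary SGD in two ways: it clips each per-example
# gradient to a maximum L2 norm, then adds Gaussian noise calibrated
# to that norm before the gradients are averaged.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # clipping bound for per-example gradients
    noise_multiplier=1.1,   # noise stddev as a multiple of the clip bound
    num_microbatches=250,   # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss is left unreduced so the optimizer can compute
# per-microbatch gradients before clipping and adding noise.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE,
)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=250, epochs=15)
```

As the abstract notes, the added hyperparameters (clip norm, noise multiplier, microbatch count) must be tuned, and the clipping that protects outlier examples is also what can slightly reduce utility.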