Machine Learning and Artificial Intelligence Security Risk - Categorizing Attacks and Failure Modes

"softddl.org"
1-03-2022, 14:25


  • Machine Learning and Artificial Intelligence Security Risk - Categorizing Attacks and Failure Modes
    LinkedIn Learning
    Duration: 1h 11m | MP4, 1280x720, 30 fps | AAC, 48 kHz, 2ch | 713 MB
    Genre: eLearning | Language: English



From predicting medical outcomes to managing retirement funds, we place a great deal of trust in machine learning (ML) and artificial intelligence (AI), even though we know these systems are vulnerable to attack and can sometimes fail us completely. In this course, instructor Diana Kelley draws on real-world examples from recent ML research to walk through the ways ML and AI can fail, offering pointers on how to design, build, and maintain resilient systems.
Learn about intentional failures caused by attacks as well as unintentional failures caused by design flaws and implementation issues. Security threats and privacy risks are serious, but with the right tools and preparation you can reduce them. Diana explains some of the most effective approaches and techniques for building robust and resilient ML, such as dataset hygiene, adversarial training, and access control for APIs.
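
To make the adversarial-training idea concrete, below is a minimal sketch of one common variant, Fast Gradient Sign Method (FGSM) adversarial training in PyTorch. The framework choice, model, data, and epsilon value are illustrative assumptions for this listing, not material taken from the course:

    # Minimal FGSM adversarial-training sketch (assumes PyTorch; model,
    # optimizer, inputs x, and labels y are placeholders).
    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
        # 1. Craft adversarial examples with the Fast Gradient Sign Method:
        #    perturb each input in the direction that increases the loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid range

        # 2. Train on the perturbed inputs so the model learns to resist them.
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(x_adv), y)
        adv_loss.backward()
        optimizer.step()
        return adv_loss.item()

Calling this step inside an ordinary training loop (in place of, or alternating with, a clean-data step) hardens the model against the same perturbations an attacker would craft at inference time.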
Homepage
https://www.linkedin.com/learning/machine-learning-and-artificial-intelligence-security-risk-categorizing-attacks-and-failure-modes


 