Summary
In this chapter, we examined prejudice in AI systems through several case studies: how Cambridge Analytica developed an AI based on stolen personal data, how Amazon developed an AI that displayed sexist traits, and how the US justice system, to some extent, relies on AI that displays racist traits.
We built our own AI system that displayed some elements of prejudice and discussed how important it is to be aware of built-in biases, especially when using pre-trained models. We gained experience with the Python library spaCy and saw how word embeddings work. We verified that our sentiment analyzer worked on movie reviews, and then tested it further with additional words associated with prejudice.
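As a quick recap of the word-embedding idea underlying those experiments, here is a minimal sketch using toy three-dimensional vectors and cosine similarity. The vectors and words are hypothetical, chosen only for illustration; real spaCy embeddings (for example, from the `en_core_web_md` model) are high-dimensional and learned from large corpora, which is precisely how societal associations can seep into them:

```python
import numpy as np

# Toy "embeddings" with hypothetical values for illustration only;
# real spaCy vectors are learned from text and are much larger.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.8, 0.3, 0.2]),
    "banana": np.array([0.1, 0.9, 0.7]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that appear in similar contexts end up with similar vectors --
# the same mechanism by which biased associations enter embeddings.
print(cosine_similarity(vectors["doctor"], vectors["nurse"]))   # high
print(cosine_similarity(vectors["doctor"], vectors["banana"]))  # lower
```

Probing pairs of words this way, as we did with our sentiment analyzer, is a simple first check for unwanted associations in a pre-trained model.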
In the next chapter, we will study the fundamentals of SQL and NoSQL databases through a practical approach, learning to write and run queries in MySQL, MongoDB, and Cassandra. Don't forget to weigh the ethical considerations of any data that...