Five jumbled up sentences, related to a topic, are given below. Four of them can be put together to form a coherent paragraph. Identify the odd one out and key in the number of the sentence as your answer:
1. Machine learning models are prone to learning human-like biases from the training data that feeds these algorithms.
2. Hate speech detection is part of the ongoing effort against oppressive and abusive language on social media.
3. The current automatic detection models miss out on something vital: context.
4. It uses complex algorithms to flag racist or violent speech faster and better than human beings alone.
5. For instance, algorithms struggle to determine if group identifiers like “gay” or “black” are used in offensive or prejudiced ways because they’re trained on imbalanced datasets with unusually high rates of hate speech.