❄️ Does snow flake it until it makes it?
Late to the Party 🎉 is about insights into real-world AI without the hype.
Hello internet,
a quick one today, as life has me entangled otherwise! So let’s look at some machine learning!
The Latest Fashion
- The New York Times sues OpenAI and Microsoft over copyright in ChatGPT
- Here’s a nice Python Cheat Sheet
- Pynimate looks great for statistical animations in Python
Worried these links might be sponsored? Fret no more. They’re all organic, as per my ethics.
Machine Learning Insights
Last week I asked, “How do you approach the challenge of explainability in complex machine learning models when presenting to domain experts?” Here’s the gist of it:
The challenge of explainability is crucial because it ensures the models are transparent, understandable, and trustworthy. Here are several strategies to approach this challenge effectively:
- Use of Interpretable Models Where Possible: Start with models that are inherently more interpretable, such as linear regression, decision trees, or logistic regression, if they can achieve the desired performance. This approach is straightforward and allows domain experts to understand how input features are related to the output. That said, this question is about complex models, so treat interpretable models as a baseline to keep in mind.
- Model Agnostic Methods: For more complex models where simpler models do not suffice, employ model agnostic methods like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations). These methods help explain the predictions of any machine learning model by approximating how changes in input feature values affect the output.
- Feature Importance: Highlighting feature importance can provide insights into which variables the model considers significant for predictions. This can be particularly useful in domain-specific contexts, like meteorology, where understanding which atmospheric parameters (e.g., temperature, humidity, pressure) most influence a weather forecasting model's predictions can be invaluable. Note also that permutation importance extends this idea beyond the built-in feature importances of tree-based models.
- Visualization Techniques: Use visual aids to make complex models more understandable. Techniques like partial dependence plots, decision tree visualizations, or neural network activation maps can make the model's decision process more accessible to non-experts.
- Case Studies and Examples: Presenting case studies or specific examples where the model's predictions align with or diverge from known outcomes can help domain experts understand its strengths and limitations. For instance, in meteorology, comparing model predictions with actual weather events can illustrate the model's accuracy and areas for improvement.
- Iterative Feedback Loop: Engage domain experts in an iterative feedback loop where their insights and questions guide further explanations and model adjustments. This collaborative approach ensures that the explanations are tailored to the domain-specific knowledge and concerns of the experts.
- Ethical and Societal Implications: Discuss the ethical and societal implications of the model's deployment, especially if the model's decisions have significant consequences. This includes considerations of fairness, bias, and transparency. These can be documented in a model card, for example.
I have written about interpretability on ml.recipes with code examples!
Got this from a friend? Subscribe here!
Question of the Week
- Can AI be effectively used to predict and mitigate the impacts of soil erosion?
Post your answers on Mastodon and tag me. I'd love to see what you come up with, and I can include them in the next issue!
Tidbits from the Web
- This video was very funny
- It’s time for performance reviews at ECMWF so I was reminded of Chris Albon’s video: Don’t Do Invisible Work
- I had this song stuck in my head for years and finally found it again
Jesper Dramsch is the creator of PythonDeadlin.es, ML.recipes, data-science-gui.de and the Latent Space Community.
I laid out my ethics, including my stance on sponsorships, in case you're interested!