🦈 Are the rubber bumpers on yachts just shark absorbers?
Late to the Party 🎉 is about insights into real-world AI without the hype.
Hello internet,
it’s time I listened to you! I have some questions for you about the future of this newsletter: what is of value to you, and what isn’t? But first, some machine learning!
The Latest Fashion
- This SIGGRAPH paper where you can drag your GAN in a neat UI is incredible
- Karpathy talked at MS Build about how ChatGPT was trained
- Build scalable React-based web apps in Python with Solara
Got this from a friend? Subscribe here!
My Current Obsession
I’m back home, and jet lag has been kicking my butt! Any tips appreciated.
I made some nice updates to pythondeadlin.es, adding some extra information to each conference page: links to sponsor information and financial aid pages! It also pulls the Mastodon / Twitter feed for each conference to show the latest updates. See, for example, Python Brasil:
Python Brasil: countdown for the Call for Participation, including conference dates and details.
Next week I will be giving an update on my SSI fellowship at the community call. Working on that presentation now, but I’m surprisingly nervous about it!
Oh, and I got my pictures from the shark dive I did on vacation! Such cute undersea puppies!
Thing I Like
I got obsessed with FPV drones and finally caved: I got the EU license to operate UAVs and bought a DJI Avata. Gonna go for my first flight in a few hours, probably. Wish me luck!
Machine Learning Insights
Last week I asked, “What is the F1 score, and when would you choose it over other metrics?”, and here’s the gist of it:
The F1 score is a commonly used metric to evaluate the performance of a classification model. It combines two evaluation measures: precision and recall.
Precision is the ratio of correctly predicted positive instances to the total number of instances predicted as positive. It measures how trustworthy the model’s positive predictions are. Recall, also known as sensitivity, is the ratio of correctly predicted positive instances to the total number of actual positive instances. It measures the model’s ability to find all positive cases.
The F1 score is the harmonic mean of precision and recall. It provides a single metric that balances both precision and recall. The harmonic mean emphasizes the lower value, so if either precision or recall is low, the F1 score will also be low.
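To make that concrete, here’s a minimal sketch in plain Python; the counts are made up purely for illustration:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute the F1 score from raw confusion-matrix counts."""
    precision = tp / (tp + fp)  # correct positives out of everything predicted positive
    recall = tp / (tp + fn)     # correct positives out of everything actually positive
    # Harmonic mean of precision and recall: dominated by the lower of the two.
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts giving precision 0.9 but recall only 0.5.
print(f1_score(tp=45, fp=5, fn=45))  # ~0.643, dragged down by the weaker recall
```

Note that the arithmetic mean of 0.9 and 0.5 would be 0.7, but the harmonic mean lands around 0.64, closer to the weaker score. That’s the “emphasizes the lower value” behavior in action.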
When would you choose the F1 score over other metrics? The F1 score is useful when you need a single number that balances precision and recall. It is commonly used in binary classification problems where the classes are imbalanced, meaning one class has significantly more instances than the other. For example, in a weather forecasting model where we care more about correctly predicting rainy days (the positive class) than non-rainy days (the negative class), the F1 score can provide a better overall evaluation of the model's performance than accuracy.
Choosing the F1 score over other metrics depends on the specific requirements of the problem. If precision and recall are equally important, the F1 score is a good fit. However, if one of the two is more critical, it may be more appropriate to prioritize precision (i.e. minimizing false positives) or recall (i.e. minimizing false negatives) directly, based on the application.
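To see why accuracy can be misleading on imbalanced data, here’s a quick sketch using scikit-learn’s metrics; the labels below are made up for illustration:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy imbalanced labels: 1 = rainy day (the rare positive class), 0 = dry day.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# A lazy model that always predicts "dry" looks great on accuracy...
y_lazy = [0] * 10
print(accuracy_score(y_true, y_lazy))             # 0.8
print(f1_score(y_true, y_lazy, zero_division=0))  # 0.0, it never catches a rainy day

# ...while a model that catches one rainy day has the same accuracy but a real F1.
y_rainy = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
print(accuracy_score(y_true, y_rainy))            # 0.8
print(f1_score(y_true, y_rainy))                  # 0.5 (precision 0.5, recall 0.5)
```

Same accuracy, very different usefulness; that gap is exactly what the F1 score surfaces.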
Data Stories
Question of the Week
Tidbits from the Web
- We’re thinking about unemployment wrong
- John Oliver did a segment about Artificial Intelligence, and it’s a fun watch
- Does a hoop with an attached weight hop?
Jesper Dramsch is the creator of PythonDeadlin.es, ML.recipes, data-science-gui.de and the Latent Space Community.
I laid out my ethics, including my stance on sponsorships, in case you're interested!