I haven’t looked closely enough to know, but I recall medical image analytics being “better than human” well before the current AI/LLM rage. Like, those systems use machine learning, but in a more deterministic, more conventional algorithmic sense. I think they can also tolerate more false positives, because the algorithm’s output is always assumed to be checked by a human physician. So my impression is that the real sense in which medical image analysis is “better” is that it flags smaller or more obscure defects that a human quickly scanning the image might overlook.
If you’re using a public mushroom identification AI as the only source for a life-and-death choice, then false positives are a much bigger problem.
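The difference between the two cases mostly comes down to where you set the classifier’s decision threshold. Here’s a minimal Python sketch of that trade-off, with entirely made-up scores and labels: a low threshold catches almost everything at the cost of extra false positives, which is fine when a physician reviews every flag, while raising the threshold trims false positives but starts missing real cases, and a standalone tool has no human backstop for either kind of error.

```python
# Hypothetical probability-of-"dangerous" scores from some classifier,
# paired with ground-truth labels (1 = dangerous, 0 = safe).
scores = [0.95, 0.80, 0.62, 0.55, 0.40, 0.30, 0.15, 0.05]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

def confusion(threshold):
    """Count outcomes when everything at/above `threshold` is flagged."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return {"true_pos": tp, "false_pos": fp, "missed": fn}

# Screening mode: a radiologist reviews every flag, so a low threshold
# is fine -- false positives cost review time, missed cases cost lives.
print("screening  (t=0.30):", confusion(0.30))  # catches all 4, 2 false flags

# Standalone mode: fewer false positives, but now one real case slips
# through, and there's no human in the loop to catch it.
print("standalone (t=0.60):", confusion(0.60))  # 0 false flags, 1 miss
```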
It’s like an iPad, but has to be plugged into the wall all the time. Rarely has a touch screen, so the only way to make it do stuff is with an external mouse and keyboard. Super useless.