What holds and what breaks in the parametric ecologies analogy?
Holds:
Both ecologies and ML models have inputs and outputs: the former in the form of energy flowing through metabolic pathways, the latter in the form of data.
Both ecologies and ML models exhibit exaptation. In ecologies, this translates to traits developed in one niche being repurposed in a different niche. In ML models, this translates to transfer learning and pre-training/fine-tuning practices.
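To make the ML side concrete, here is a minimal sketch of the pre-training/fine-tuning pattern using torchvision's ResNet-18; the 10-class downstream task is a placeholder assumption:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained "niche": ImageNet classification. We repurpose the
# learned features (exaptation) in a new niche, here 10 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone; only the new head adapts to the new niche.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # new task head

# Only the head's parameters receive gradient updates.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```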
Both ecologies and ML models exhibit gradualism. In ecologies, you wouldn't get complexity like the human eye if simpler eyes hadn't been at least somewhat useful in the past. In ML models, you wouldn't get certain behaviors if earlier changes towards them hadn't led to improvements.
Both ecologies and ML models grow resilient from redundancy. In ecologies, this translates to multiple energy pathways making species resilient against one food source disappearing. In ML models, this translates to Dropout/DropConnect-like regularizers encouraging what I'd call "internal ensembles" reaching a sensible consensus despite one pathway going wacky.
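As a rough illustration of these internal ensembles, here is a sketch that keeps Dropout active (in the spirit of MC dropout) and averages the sampled pathways; the architecture and sizes are arbitrary:

```python
import torch
import torch.nn as nn

# A small net with Dropout: every forward pass samples a random
# subnetwork, so the full model behaves like an implicit ensemble.
net = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly silences half the units per pass
    nn.Linear(64, 1),
)

x = torch.randn(1, 16)

# Keep dropout active to sample different "pathways".
net.train()
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])

# The consensus prediction stays sensible even though individual
# pathways are randomly knocked out on every pass.
print(samples.mean().item(), samples.std().item())
```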
Both ecologies and ML models exhibit convergent dynamics. In ecologies, this translates to convergent evolution (e.g. flight having evolved independently multiple times on Earth because it's useful in certain niches). In ML models, this translates to instrumental convergence and something akin to natural abstractions (e.g. varied models trained on face recognition independently forming similar anatomical abstractions).
Both ecologies and ML models lend themselves to co-evolution loops. In ecologies, this translates to both cooperative dynamics (e.g. symbionts growing interdependent) and competitive ones (e.g. leopards and antelopes both getting faster). In ML models, this translates to training setups involving multiple models (e.g. generator and discriminator co-evolving in GANs).
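Here is a minimal single-step sketch of that co-evolution loop in a toy GAN; the data distribution and architectures are placeholder assumptions:

```python
import torch
import torch.nn as nn

# Toy 1-D GAN: each model forms the other's "niche".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 1) * 2 + 3  # stand-in "real" data
noise = torch.randn(64, 8)

# Discriminator step: adapt to the generator's current outputs.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: adapt to the discriminator's current judgments.
loss_g = bce(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```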
Both ecologies and ML models lend themselves to optimization pressure. In ecologies, this translates to evolutionary pressure to adapt to available niches. In ML models, this translates to optimization pressure exerted via gradient descent.
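Optimization pressure in miniature, as a sketch with an arbitrary target: gradient descent relentlessly pulls a parameter toward whatever the loss rewards:

```python
import torch

# A single parameter under optimization pressure.
w = torch.tensor([5.0], requires_grad=True)

for _ in range(100):
    loss = (w - 2.0) ** 2  # the "niche" the parameter must fit
    loss.backward()
    with torch.no_grad():
        w -= 0.1 * w.grad  # descend the gradient
    w.grad.zero_()

print(w.item())  # converges toward 2.0
```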
Both ecologies and ML models lack intelligent design. In ecologies, this translates to Darwinism. In ML models, this translates to ML as opposed to rule-based/symbolic/non-ML methods.
Both ecologies and ML models can shape their inputs. In ecologies, this translates to biotic influences over abiotic factors. In ML models, this translates to learned embeddings which accumulate gradients despite sitting at the bottom of the "trophic chain" and serving no role other than being consumed.
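A small sketch of this, assuming a toy embedding table and linear head: the embeddings are only ever consumed downstream, yet gradients flow back and reshape them:

```python
import torch
import torch.nn as nn

# Embeddings sit at the bottom of the "trophic chain": they are only
# consumed by downstream layers, yet gradients flow back and shape
# them, like biota reshaping their abiotic substrate.
embedding = nn.Embedding(num_embeddings=100, embedding_dim=16)
head = nn.Linear(16, 1)

tokens = torch.tensor([3, 14, 15])
loss = head(embedding(tokens)).sum()
loss.backward()

# The "inputs" themselves received gradient, i.e. they get reshaped.
print(embedding.weight.grad[tokens])
```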
Partially holds:
Many ML models are optimized to fit fixed computational niches. In contrast, the species in an ecology subtly influence each other over time, and hence drift away from their initial niches. However, some ML training paradigms exhibit non-fixed niches. Any regime involving multiple models is one such case (as the models mutually define each other's niche). Additionally, individual models trained to reverse corruptions (e.g. denoising, diffusion, language modeling, masked language modeling) occasionally re-define their own niche when their outputs feed back into training, shaping the very distribution they are fitted to (e.g. cascades, self-distillation).
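A rough sketch of that niche shift, with a toy denoiser whose own predictions later become its targets (a self-distillation flavor); shapes and noise levels are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# A denoiser's niche is "corrupted data -> clean data".
denoiser = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.randn(64, 8)
noisy = clean + 0.3 * torch.randn(64, 8)

# Stage 1: ordinary denoising against fixed external targets.
loss = ((denoiser(noisy) - clean) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()

# Stage 2 (self-distillation flavor): the model's own predictions
# become the targets, so the niche is no longer externally fixed.
with torch.no_grad():
    pseudo_targets = denoiser(noisy)
renoised = noisy + 0.3 * torch.randn(64, 8)
loss = ((denoiser(renoised) - pseudo_targets) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```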
It's unclear what the individual members of the trophic chain are. Individual neurons? Individual layers? Individual blocks (e.g. a transformer encoder block containing multiple simpler layers)? Multiple choices are compatible with the analogy, especially if the individuals don't internally influence or feed on themselves.
It's not perfectly clear to me how reinforcement learning would be framed in this input-output niche setting. Perhaps environment states are inputs and actions are outputs, but what about reward? This ecology isn't tasked with fitting a particular computational niche, because we don't know what that would look like. We're guiding it by fitness directly, sort of?
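One way to see where reward sits, as a REINFORCE-style sketch (my framing, not a settled answer): states flow in, actions flow out, and reward never passes through the network, it only scales the gradient:

```python
import torch
import torch.nn as nn

# Minimal REINFORCE-style step: reward is neither an input nor an
# output of the model; it weights the optimization pressure itself.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(4)                 # input: environment state
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()                 # output: action
reward = torch.tensor(1.0)             # "fitness", supplied from outside

# Reward scales the log-probability gradient; it never flows through
# the network as data.
loss = -dist.log_prob(action) * reward
opt.zero_grad()
loss.backward()
opt.step()
```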